vagrant-swift-all-in-one is used and maintained by a number of Swift
developers, and it has an open source license.
The ansible playbook project serves a similar goal, but it is based on
a Fedora distribution and includes Swift-on-File support.
Drive-by fix for the Swift-on-File link, which has migrated to
stackforge.
Change-Id: Id7478d58adcead57cf56ac4e1d05c6556c8c9b7b
Updated proxy-server.conf-sample with the correct default. Also
updated the note on the overview-auth doc page.
Change-Id: I5cd62a7a118a28f7b58f47b8d8d4d963f6bc7347
Bring docs in line with changes to auth_token config
defaults made in I7076fa03ab531cbb1114918f75113620b65590dc
Change-Id: Ia21685ebd1f3ed7bdba9de2ebac9fdcce8495949
This change modifies swift-ring-builder and introduces a new format
for the sub-commands (search, list_parts, set_weight, set_info and
remove), in addition to the add sub-command, so that hostnames can be
used in place of an IP address in these sub-commands.
The account reaper, container synchronizer, and replicators were also
updated so that they still have a way to identify a particular device
as being "local".
Previously this was Change-Id:
Ie471902413002872fc6755bacd36af3b9c613b74
Change-Id: Ieff583ffb932133e3820744a3f8f9f491686b08d
Co-Authored-By: Alex Pecoraro <alex.pecoraro@emc.com>
Implements: blueprint allow-hostnames-for-nodes-in-rings
To make it easier for Swift operators to identify problematic devices,
a policy index is recorded in the log files of the proxy and storage
servers for each user request that relates to a storage policy.
This patch simply adds a 'storage_policy_index' field to the log
format.
If no policy index applies, '-' is output in this field.
Extra fix: the doc about the storage node log line now properly
reflects the 'server_pid' field.
DocImpact
Change-Id: I7286ae85bcbcec73b5377dc115cbdb0f57d1b025
Implements: blueprint logging-policy-number
Current behavior:
* If a data/body is present in the manifest file PUT request, the
data/body gets saved onto disk, just like for a normal object.
* Generally, this data in the manifest file is never served on a GET
response. However, when the manifest object path itself matches the
prefix, the GET response will contain the data present in the manifest
file as well.
* The query param multipart-manifest=get, meant to retrieve the SLO
manifest, also works in the case of a DLO manifest. Hence a COPY
request with the multipart-manifest=get query param will actually copy
the DLO manifest.
How things should have been:
* The DLO manifest object is supposed to have no content and to carry
only the X-Object-Manifest metadata header.
* The query param multipart-manifest=get is SLO specific and shouldn't
play any role in DLO.
This change intends only to document the current behaviour, not to
change it, assuming there are users who have previously saved some
content in a DLO manifest file and/or have been using
multipart-manifest=get to fetch and/or COPY the DLO manifest file with
its content.
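For illustration, a DLO manifest that matches the intended behaviour
is created with an empty body and only the X-Object-Manifest header,
for example (token, endpoint and names hypothetical):

    curl -X PUT -H 'X-Auth-Token: <token>' \
        -H 'X-Object-Manifest: container/prefix' \
        -H 'Content-Length: 0' \
        http://saio:8080/v1/AUTH_test/container/manifest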
Change-Id: I0f6e175ad7752169ecf94df949336e0665928df7
Signed-off-by: Prashanth Pai <ppai@redhat.com>
The way we do this now involves a conf change and a proxy reload,
which is a pain. You can now just set one of these:
X-Account-Sysmeta-Global-Write-Ratelimit: WHITELIST
or
X-Account-Sysmeta-Global-Write-Ratelimit: BLACKLIST
NOTE:
The existing proxy config settings, account_whitelist and
account_blacklist, will continue to work.
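A minimal sketch of how an operator might set such a header, assuming
the swift.common.internal_client.InternalClient API and a hypothetical
internal-client conf path (sysmeta headers cannot be set through the
normal external API, so a privileged internal path is used):

    # Sketch: whitelist an account for the global write ratelimit.
    # Conf path and account name are hypothetical.
    from swift.common.internal_client import InternalClient

    client = InternalClient('/etc/swift/internal-client.conf',
                            'ratelimit-admin', 3)
    client.set_account_metadata(
        'AUTH_test',
        {'X-Account-Sysmeta-Global-Write-Ratelimit': 'WHITELIST'})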
Change-Id: I532663f1d2c75d03170c5fdb9b330416822fbc88
Output a dispersion report that shows how many parts have each replica
count at each tier, along with some additional context. Also, the
max_dispersion is a good canary for what a reasonable overload might
be.
Also display a warning on rebalance if the ring's dispersion is sub-optimal.
The primitive form of the dispersion graph is cached on the builder, but the
dispersion command will build it on the fly if you have a ring that was last
rebalanced before the change.
Also add --force option to rebalance to make it write a ring even if less than
1% of parts moved.
Also try to clarify dispersion and balance a little bit in the ring
section of the architectural overview.
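For example (builder file name hypothetical):

    swift-ring-builder object.builder dispersion
    swift-ring-builder object.builder rebalance --force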
Co-Authored-By: Christian Schwede <christian.schwede@enovance.com>
Co-Authored-By: Darrell Bishop <darrell@swiftstack.com>
Change-Id: I7696df25d092fac56588080722e0a4167ed2c824
The ring builder's placement algorithm has two goals: first, to ensure
that each partition has its replicas as far apart as possible, and
second, to ensure that partitions are fairly distributed according to
device weight. In many cases, it succeeds in both, but sometimes those
goals conflict. When that happens, operators may want to relax the
rules a little bit in order to reach a compromise solution.
Imagine a cluster of 3 nodes (A, B, C), each with 20 identical disks,
and using 3 replicas. The ring builder will place 1 replica of each
partition on each node, as you'd expect.
Now imagine that one disk fails in node C and is removed from the
ring. The operator would probably be okay with remaining at 1 replica
per node (unless their disks are really close to full), but to
accomplish that, they have to multiply the weights of the other disks
in node C by 20/19 to make C's total weight stay the same. Otherwise,
the ring builder will move partitions around such that some partitions
have replicas only on nodes A and B.
If 14 more disks failed in node C, the operator would probably be okay
with some data not living on C, as a 4x increase in storage
requirements is likely to fill disks.
This commit introduces the notion of "overload": how much extra
partition space can be placed on each disk *over* what the weight
dictates.
For example, an overload of 0.1 means that a device can take up to 10%
more partitions than its weight would imply in order to make the
replica dispersion better.
Overload only has an effect when replica-dispersion and device weights
come into conflict.
The overload is a single floating-point value for the builder
file. Existing builders get an overload of 0.0, so there will be no
behavior change on existing rings.
In the example above, imagine the operator sets an overload of 0.112
on their rings. If node C loses a drive, each other drive can take on up
to 11.2% more data. Splitting the dead drive's partitions among the
remaining 19 results in a 5.26% increase, so everything that was on
node C stays on node C. If another disk dies, then we're up to an
11.1% increase, and so everything still stays on node C. If a third
disk dies, then we've reached the limits of the overload, so some
partitions will begin to reside solely on nodes A and B.
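A sketch of how this might be set, assuming a set_overload sub-command
of swift-ring-builder (builder file name hypothetical):

    swift-ring-builder object.builder set_overload 0.112
    swift-ring-builder object.builder rebalance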
DocImpact
Change-Id: I3593a1defcd63b6ed8eae9c1c66b9d3428b33864
After the discussion at https://review.openstack.org/#/c/129384/, this
is moving to the doc directory in the swift repo.
This lets us eliminate the object-api repo, along with all the
<service>-api repos, and move the content to audience-centric
locations.
Change-Id: Ia0d9973847f7409a02dcc1a0e19400a3c3ecdf32
Instead of recommending to edit resetswift to replace "/dev/sdb1" with
"/srv/swift-disk", use an environment variable. This way I can set
SAIO_BLOCK_DEVICE=/srv/swift-disk in my .bashrc, and then when I'm
testing out changes to resetswift, I don't need to remember to edit
the modified script, nor do I end up submitting changes with the wrong
default in there.
The variable defaults to /dev/sdb1, so if you use the script unmodified
and don't set SAIO_BLOCK_DEVICE, nothing changes for you.
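A minimal sketch of the pattern (the exact resetswift lines may
differ):

    # In ~/.bashrc, an optional override:
    export SAIO_BLOCK_DEVICE=/srv/swift-disk

    # In resetswift, the variable falls back to the old default:
    sudo mkfs.xfs -f ${SAIO_BLOCK_DEVICE:-/dev/sdb1}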
Change-Id: I741a8c91c2c54a4f32bc391cd794ef4206402753
Let's use the full project name to avoid confusion with the recently added
Swiftbrowser based on AngularJS.
Change-Id: Ib07338268a1593bc2882908b49c1fb4a130ff43d
Simply replacing a failed disk requires a very long time if the ring is not
changed, because all data will be replicated to a single new disk. This extends
the time to recover from missing replicas, and becomes even more important with
bigger disks.
This patch updates the doc to include a faster alternative: setting
the weight of the failed disk to 0. In this case the partitions from
the failed disk are distributed and replicated to the remaining disks
in the cluster, and because each disk gets only a fraction of the
partitions, it's also much faster.
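For example, a failed device can be drained by setting its weight to 0
and rebalancing (builder file name and device search value
hypothetical):

    swift-ring-builder object.builder set_weight d376 0
    swift-ring-builder object.builder rebalance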
Change-Id: I16617756359771ad89ca5d4690b58a014f481d9b
These docs currently say we target Ubuntu 10.04 and an eventlet
version that our requirements file does not allow. Update these
versions.
Change-Id: I052b6561f88ec90f865454e426032f1baf4586c0
The multi-node install was so horribly outdated that it still referred
to Ubuntu 10.04. docs.openstack.org has a usable Swift installation
guide - link to it until such time as this page can be fixed.
Change-Id: I29fa334d9ffc9b63c8f31c664e7509b2f2577574
Clean up and add clarification to the documentation for using Keystone
auth.
Update it to refer to the auth_token middleware being distributed as
part of the keystonemiddleware project rather than keystone.
Include capabilities (/info) in the list of reasons
why delay_auth_decision might need to be set in
auth_token middleware config.
Add description of the project_id:user_id format
for container ACLs and emphasize that ids rather than
names should be used since this patch has now merged:
https://review.openstack.org/#/c/86430
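For illustration, an id-based cross-project read ACL could be set with
python-swiftclient roughly as follows (ids and container name
hypothetical):

    swift post -r '0123456789abcdef:fedcba9876543210' shared_container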
DocImpact
blueprint keystone-v3-support
Change-Id: Idda4a3dcf8240474f1d2d163016ca2d40ec2d589
Remove intersphinx from the docs build as it triggers network calls
that occasionally fail, and we don't really use intersphinx (it links
to other sphinx documents out on the internet).
This also removes the requirement for internet access during the docs
build. Those network failures can cause docs jobs to fail if the
project errors out on warnings.
Related-Bug: #1368910
The keystoneauth middleware supports cross-tenant access
control using the syntax <tenant>:<user> in container ACLs,
where <tenant> and <user> may currently be either a unique
id or a name. As a result of the keystone v3 API introducing
domains, names are no longer globally unique and are only
unique within a domain. The use of unqualified tenant and
user names in this ACL syntax is therefore not 'safe' in a
keystone v3 environment.
This patch modifies keystoneauth to restrict cross-tenant
ACL matching to use only ids for accounts that are not in
the default domain. For backwards compatibility,
names will still be matched in ACLs when both the requesting
user and tenant are known to be in the default domain AND the
account's tenant is also in the default domain (the default
domain being the domain to which existing tenants are
migrated).
Accounts existing prior to this patch are assumed to be for
tenants in the default domain. New accounts created using a
v2 token scoped on the tenant are also assumed to be in the
default domain. New accounts created using a v3 token scoped
on the tenant will learn their domain membership from the
token info. New accounts created using any unscoped token,
(i.e. with a reselleradmin role) will have unknown domain
membership and therefore be assumed to NOT be in the default
domain.
Despite this provision for backwards compatibility, names
must no longer be used when setting new ACLs in any account,
including new accounts in the default domain.
This change obviously impacts users accustomed to specifying
cross-tenant ACLs in terms of names, and further work will be
necessary to restore those use cases. Some ideas are
discussed under the bug report. With that caveat, this patch
removes the reported vulnerability when using
swift/keystoneauth with a keystone v3 API.
Note: to observe the new 'restricted' behaviour you will need to set
up keystone user(s) and tenant(s) in a non-default domain and set
auth_version = v3.0 in the auth_token middleware config
section of proxy-server.conf. You may also benefit from the
keystone v3 enabled swiftclient patch under review here:
https://review.openstack.org/#/c/91788/
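A sketch of the relevant auth_token fragment in proxy-server.conf
(option values hypothetical and abbreviated):

    [filter:authtoken]
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory
    identity_uri = http://keystonehost:35357/
    auth_version = v3.0
    admin_user = swift
    admin_tenant_name = service
    admin_password = password
    delay_auth_decision = 1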
DocImpact
blueprint keystone-v3-support
Closes-Bug: #1299146
Change-Id: Ib32df093f7450f704127da77ff06b595f57615cb
Currently the theme used by the Swift developer docs is out of date;
it should be using oslosphinx to provide a similar look and feel
across all OpenStack-related projects.
Change-Id: Id7c226cdc13c6c4f3b5082b1ef4dfe09966b21ec