Configuration option = Default value | Description |
---|---|
account_autocreate = false | If set to 'true', authorized accounts that do not yet exist within the Swift cluster will be automatically created. |
allow_account_management = false | Whether account PUTs and DELETEs are even callable. |
auto_create_account_prefix = . | Prefix to use when automatically creating accounts. |
client_chunk_size = 65536 | Chunk size to read from clients. |
conn_timeout = 0.5 | Connection timeout to external services. |
deny_host_headers = | Comma separated list of Host headers to which the proxy will deny requests. |
error_suppression_interval = 60 | Time in seconds that must elapse since the last error for a node to be considered no longer error limited. |
error_suppression_limit = 10 | Error count to consider a node error limited. |
log_handoffs = true | Log handoff requests if handoff logging is enabled and the handoff was not expected. We only log handoffs when we have pushed the handoff count further than we would normally expect, that is, beyond (request_node_count - num_primaries); when the handoff count goes higher than that, it means one of the primaries must have been skipped because of error limiting before we consumed all of our nodes_left. |
max_containers_per_account = 0 | If set to a positive value, trying to create a container when the account already has at least this many containers will result in a 403 Forbidden. Note: This is a soft limit, meaning a user might exceed the cap for up to recheck_account_existence seconds before the 403s kick in. See the sample configuration after this table. |
max_containers_whitelist = | Comma separated list of account names that ignore the max_containers_per_account cap. |
node_timeout = 10 | Request timeout to external services. |
object_chunk_size = 65536 | Chunk size to read from object servers. |
object_post_as_copy = true | Set object_post_as_copy = false to turn on fast posts where only the metadata changes are stored anew and the original data file is kept in place. This makes for quicker posts; but since the container metadata isn't updated in this mode, features like container sync won't be able to sync posts. |
post_quorum_timeout = 0.5 | How long to wait for requests to finish after a quorum has been established. |
put_queue_depth = 10 | Depth of the proxy put queue. |
read_affinity = | Which backend servers to prefer on reads. Format is r<N> for region N or r<N>z<M> for region N, zone M. The value after the equals sign is the priority; lower numbers are higher priority. Example: first read from region 1 zone 1, then region 1 zone 2, then anything in region 2, then everything else: read_affinity = r1z1=100, r1z2=200, r2=300. Default is empty, meaning no preference. See the sample configuration after this table. |
recheck_account_existence = 60 | Cache timeout in seconds to send to memcached for account existence. |
recheck_container_existence = 60 | Cache timeout in seconds to send to memcached for container existence. |
recoverable_node_timeout = node_timeout | Request timeout to external services for requests that, on failure, can be recovered from. For example: object GET. |
request_node_count = 2 * replicas | Set to the number of nodes to contact for a normal request. You can use '* replicas' at the end to have it use the number given times the number of replicas for the ring being used for the request. |
set log_address = /dev/log | Location where syslog sends the logs to. |
set log_facility = LOG_LOCAL0 | Syslog log facility. |
set log_level = INFO | Log level. |
set log_name = proxy-server | Label to use when logging. |
sorting_method = shuffle | Storage nodes can be chosen at random (shuffle), by using timing measurements (timing), or by using an explicit match (affinity). Using timing measurements may allow for lower overall latency, while using affinity allows for finer control. In both the timing and affinity cases, equally-sorting nodes are still randomly chosen to spread load. The valid values for sorting_method are "affinity", "shuffle", or "timing". |
swift_owner_headers = x-container-read, x-container-write, x-container-sync-key, x-container-sync-to, x-account-meta-temp-url-key, x-account-meta-temp-url-key-2, x-container-meta-temp-url-key, x-container-meta-temp-url-key-2, x-account-access-control | These are the headers whose values will only be shown to the list of swift_owners. The exact definition of a swift_owner is up to the auth system in use, but usually indicates administrative responsibilities. |
timing_expiry = 300 | If the "timing" sorting_method is used, the timings will only be valid for the number of seconds configured by timing_expiry. |
use = egg:swift#proxy | Entry point of paste.deploy in the server. |
write_affinity = r1, r2 | This setting lets you trade data distribution for throughput. It makes the proxy server prefer local back-end servers for object PUT requests over non-local ones. Note that only object PUT requests are affected by the write_affinity setting; POST, GET, HEAD, DELETE, OPTIONS, and account/container PUT requests are not affected. The format is r<N> for region N or r<N>z<M> for region N, zone M. If this is set, then when handling an object PUT request, some number (see the write_affinity_node_count setting) of local backend servers will be tried before any non-local ones. Example: try to write to regions 1 and 2 before writing to any other nodes: write_affinity = r1, r2. See the sample configuration after this table. |
write_affinity_node_count = 2 * replicas | This setting is only useful in conjunction with write_affinity; it governs how many local object servers will be tried before falling back to non-local ones. You can use '* replicas' at the end to have it use the number given times the number of replicas for the ring being used for the request: write_affinity_node_count = 2 * replicas |
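
A minimal sketch of a read-affinity setup, assuming it goes in the [app:proxy-server] section of proxy-server.conf. The priorities reuse the example from the read_affinity row above; they are illustrative, not defaults.

```ini
[app:proxy-server]
use = egg:swift#proxy
# read_affinity only takes effect when sorting_method is set to "affinity".
sorting_method = affinity
# Prefer region 1 zone 1, then region 1 zone 2, then anything in region 2,
# then everything else (lower number = higher priority).
read_affinity = r1z1=100, r1z2=200, r2=300
```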
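A similar sketch for write affinity, again with illustrative values taken from the write_affinity and write_affinity_node_count rows. With these values, an object PUT tries up to 2 * replicas local object servers in regions 1 and 2 before falling back to non-local ones.

```ini
[app:proxy-server]
use = egg:swift#proxy
# Prefer back-end servers in regions 1 and 2 for object PUT requests only.
write_affinity = r1, r2
# Number of local object servers to try before any non-local ones;
# '* replicas' scales with the replica count of the ring in use.
write_affinity_node_count = 2 * replicas
```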
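And a sketch of the container-cap options; the cap of 1000 and the account name admin_account are made-up values for illustration, not defaults.

```ini
[app:proxy-server]
use = egg:swift#proxy
# Return 403 Forbidden once an account already holds at least 1000 containers.
# This is a soft limit, enforced within recheck_account_existence seconds.
max_containers_per_account = 1000
# Accounts exempt from the cap (illustrative account name).
max_containers_whitelist = admin_account
```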