Adjust the swift.yml and rpc_user_config.yml files for swift
* Add storage/replication network settings
* Simplify out swift.yml
* Clarify the variables

Fixes #585
This commit is contained in:
parent
71068eafd4
commit
37816e934f
@@ -1,103 +1,134 @@
---
# Setup swift group variables when using swift (Not required if not using swift)
# part power is required under swift. This can't be changed once the ring is built
# For account/container specifying min_part_hours and repl_number is all that can be set.
# These 2 can be set at the "swift" level to work as a default.
# Alternatively defaults will be used (repl_number of 3, and min_part_hours of 1).
# For storage policies, a name and unique index is required as well as repl_number and
# min_part_hours which will be set to a default value if not specified.
# There MUST be a storage policy with index 0 configured which will be the default for legacy containers (created pre-storage policies).
# You can set one policy to be "default: yes"; this will be the default storage policy for non-legacy containers that are created.
# The index value must be unique.
# Storage policies can be set to "deprecated: yes", which will mean they are not used.
# You can specify a default mount_point to avoid having to specify it for each node.
# This can be overridden by specifying it for a specific node.
# You can specify default drives in the global_overrides section; these drives will be used
# if no other drives are specified per device. These work in the same way as the per node
# drives, so the same settings can be used.
## Swift group variables are required only when using swift.
## Below is a sample configuration.
##
## The part_power value is required at the swift level and cannot be changed once the ring has been built without manually removing the rings and rerunning the ring builder.
##
## The weight value is not required and will default to 100 if not specified. This value applies to all drives that are set up, but can be overridden on a drive or node basis by setting it in the node or drive config.
##
## The min_part_hours and repl_number values are not required and will default to "1" and "3" respectively. Setting these at the swift level applies them as a default for all rings (including account/container). They can be overridden on a per ring basis by adjusting the value for account/container or a specific storage_policy.
##
## If you are using a storage_network, specify the interface that the storage_network is set up on. If this value isn't specified, the swift services will listen on the default management IP. NB: if the storage_network isn't set but per-host storage_ips are set (or the storage_ip is not on the storage_network interface), the proxy server will not be able to connect to the storage services, as this directly changes the IP address the storage hosts are listening on.
##
## If you are using a dedicated replication network, specify the interface that the replication network is set up on. If this value isn't specified, no dedicated replication_network will be set. As with the storage_network, this affects the IP that the replication service listens on; if the repl_ip isn't set on that interface, replication will not work properly.
##
## Set the default drives per host. This is useful when all hosts have exactly the same drives. It can be overridden on a "per host" basis.
##
## Set the default mount_point, which is the location where your swift drives are mounted. For example, with a mount point of /mnt and a drive of sdc, there should be a drive mounted at /mnt/sdc on the swift_host. This can be overridden on a per host basis if required.
##
## For the account and container rings, min_part_hours and repl_number are the only values that can be set. Setting them here will override the defaults for the specific ring.
##
## Specify your storage_policies. There must be at least one storage policy, and at least one storage policy with an index of 0 for legacy containers created before storage policies were instituted. Exactly one storage policy must have "default: True" set. The options that can be set for storage_policies are name (str), index (int), default (bool), deprecated (bool), repl_number (int) and min_part_hours (int), with the last 2 overriding the default if specified.
##
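A note on sizing part_power (general Swift behaviour rather than anything stated in this file): the ring is built with 2^part_power partitions, so the sample value used below (part_power: 8) works out to 2^8 = 256 partitions; larger clusters normally warrant a higher value sized against the expected maximum number of drives.
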
# global_overrides:
#   swift:
#     part_power: 8
#     account:
#       repl_number: 3
#       min_part_hours: 1
#     container:
#       repl_number: 3
#     storage_policies:
#       - policy:
#           name: gold
#           index: 0
#           repl_number: 3
#           default: yes
#       - policy:
#           name: silver
#           index: 1
#           repl_number: 2
#           deprecated: yes
#     mount_point: /mnt
#     weight: 100
#     min_part_hours: 1
#     repl_number: 3
#     storage_network: 'br-storage'
#     replication_network: 'br-repl'
#     drives:
#       - name: sdb
#       - name: sdc
#       - name: sdd
#       - name: sde
#       - name: sdf
#     mount_point: /mnt
#     account:
#     container:
#     storage_policies:
#       - policy:
#           name: gold
#           index: 0
#           default: True
#       - policy:
#           name: silver
#           index: 1
#           repl_number: 3
#           deprecated: True

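Read together, the rules above mean a bare-minimum swift block only needs part_power plus a single index-0 default policy; everything else can fall back to the documented defaults (weight 100, repl_number 3, min_part_hours 1). A minimal sketch along those lines, with purely illustrative values not taken from this file:

    global_overrides:
      swift:
        part_power: 8
        mount_point: /srv/node
        drives:
          - name: sdb
        storage_policies:
          - policy:
              name: default
              index: 0
              default: True
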
## Specify the swift-proxy_hosts - these will typically be your infra nodes and are where your swift_proxy containers will be created.
## All that is required is the IP address of the host that ansible will connect to.

# User defined Swift Proxy hosts - not required when not using swift
# Will deploy a swift-proxy container on these hosts.
# Recommend mirroring the infra_hosts
# swift-proxy_hosts:
#   infra1:
#     ip: 172.29.236.100
#   infra2:
#     ip: 172.29.236.101
#   infra3:
#     ip: 172.29.236.102
#   infra-node1:
#     ip: 192.0.2.1
#   infra-node2:
#     ip: 192.0.2.2
#   infra-node3:
#     ip: 192.0.2.3

## Specify the swift_hosts which will be the swift storage nodes.
##
## The ip is the address of the host that ansible will connect to.
##
## All swift settings are set under swift_vars.
##
## The storage_ip and repl_ip represent the IPs that will go in the ring for storage and replication.
## E.g. for swift-node1 the IP string added to the ring would be 198.51.100.4:(service_port)R203.0.113.4:(service_port)
## If only the storage_ip is specified then the repl_ip will default to the storage_ip.
## If only the repl_ip is specified then the storage_ip will default to the host ip above.
## If neither is specified, both will default to the host ip above. (An illustration of the resulting ring entry follows this comment block.)
##
## zone and region can be specified for swift when building the ring.
##
## groups can be set to list which rings a host's drive should belong to. This can be set on a per drive basis, which will override the host setting.
##
## swift-node5 is an example of overriding the values: the groups are set at the host level and overridden on drive sdb, the weight is overridden for the host and specifically adjusted on drive sdb, and the storage/repl IPs are different for sdb.
##

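To illustrate where those addresses end up (this is standard swift-ring-builder device syntax, not anything taken verbatim from this repository): with the swift-node1 values above, the device entry written into the ring builder would look roughly like r1z0-198.51.100.4:6000R203.0.113.4:6000/sdb, i.e. storage IP and port, an "R" separator, replication IP and port, then the device name. The region/zone prefix comes from the region and zone variables described below; the port number and device name here are placeholders.
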
# User defined Object Storage Hosts - this is not a required group
# Under swift_vars you can specify the host specific swift_vars.
# region - the swift region; this isn't required.
# zone - the swift zone; this isn't required either and will default to 0
# mount_point - where the drives are mounted on the server
# drives - A list of drives in the server (Must have a name as a minimum)
# The above 4 vars are "host specific"
# weight: a disk's weight (defaults to 100 if not specified)
# repl_ip: IP specific for object replication (not required)
# repl_port: Port specific for object replication (not required)
# groups: A list of groups to add the drive to. A group is either a storage policy or the account or container servers. (If not specified, defaults to all groups: container/account/all storage policies.)
# The above 4 can be specified on a per host or per drive basis,
# or both, in which case "per drive" will take precedence for the specific drive.
# ip can be specified in swift_vars to override the host's ip,
# or per drive to override all for that specific drive.
# swift_hosts:
#   object_storage1:
#     ip: 172.29.236.108
#     container_vars:
#       swift_vars:
#         region: 0
#         zone: 0
#         groups:
#           - silver
#           - account
#         mount_point: /srv/node
#         drives:
#           - name: sdb
#             ip: 172.10.100.100
#             repl_ip: 10.10.0.1
#             repl_port: 54321
#             groups:
#               - gold
#               - account
#               - container
#           - name: sdc
#             weight: 150
#           - name: sdd
#           - name: sde
#
#   object_storage2:
#     ip: 172.29.236.109
#     container_vars:
#       swift_vars:
#         region: 0
#         zone: 1
#         mount_point: /srv/node
#         drives:
#           - name: sdb
#           - name: sdc
#   swift-node1:
#     ip: 192.0.2.4
#     container_vars:
#       swift_vars:
#         storage_ip: 198.51.100.4
#         repl_ip: 203.0.113.4
#         zone: 0
#   swift-node2:
#     ip: 192.0.2.5
#     container_vars:
#       swift_vars:
#         storage_ip: 198.51.100.5
#         repl_ip: 203.0.113.5
#         zone: 1
#   swift-node3:
#     ip: 192.0.2.6
#     container_vars:
#       swift_vars:
#         storage_ip: 198.51.100.6
#         repl_ip: 203.0.113.6
#         zone: 2
#   swift-node4:
#     ip: 192.0.2.7
#     container_vars:
#       swift_vars:
#         storage_ip: 198.51.100.7
#         repl_ip: 203.0.113.7
#         zone: 3
#   swift-node5:
#     ip: 192.0.2.8
#     container_vars:
#       swift_vars:
#         storage_ip: 198.51.100.8
#         repl_ip: 203.0.113.8
#         zone: 4
#         region: 3
#         weight: 200
#         groups:
#           - account
#           - container
#           - silver
#         drives:
#           - name: sdb
#             storage_ip: 198.51.100.9
#             repl_ip: 203.0.113.9
#             weight: 75
#             groups:
#               - gold
#           - name: sdc
#           - name: sdd
#           - name: sde
#           - name: sdf

@@ -74,6 +74,8 @@ global_overrides:
        - cinder_api
        - cinder_volume
        - nova_compute
        # If you are using the storage network for swift_proxy add it to the group_binds
        # - swift_proxy
      type: "raw"
      container_bridge: "br-storage"
      container_interface: "eth2"
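For orientation, the hunk above sits inside a provider network entry of rpc_user_config.yml. With the storage network also used for the proxy, the relevant stanza would look roughly like the sketch below; the nesting and the ip_from_q key are assumptions based on the usual storage-network entry, not values taken from this diff:

    global_overrides:
      provider_networks:
        - network:
            group_binds:
              - cinder_api
              - cinder_volume
              - nova_compute
              - swift_proxy
            type: "raw"
            container_bridge: "br-storage"
            container_interface: "eth2"
            ip_from_q: "storage"
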