Adjust the swift.yml and rpc_user_config.yml files for swift
* Add storage/replication network settings
* Simplify swift.yml
* Clarify the variables

Fixes #585
parent 71068eafd4
commit 37816e934f
swift.yml

@@ -1,103 +1,134 @@
 ---
-# Setup swift group variables when using swift (Not required if not using swift)
-# part power is required under swift. This can't be changed once the ring is built
-# For account/container speciying min_part_hours and repl_number is all that can be set.
-# These 2 can be set at the "swift" level to work as a default.
-# Alternatively defaults will be used (repl_number of 3, and min_part_hours of 1).
-# For storage policies, a name and unique index is required as well as repl_number and
-# min_part_hours which will be set to a default value if not specified.
-# There MUST be a storage policy with index 0 configured which will be the default for legacy containers (created pre-storage policies).
-# You can set one policy to be "default: yes" this will be the default storage policy for non-legacy containers that are created.
-# The index value must be unique.
-# Storage policies can be set to "deprecated: yes" which will mean they are not used
-# You can specify a default mount_point to avoid having to specify it for each node.
-# This can be overridden by specifying it for a specific node.
-# You can specify default drives in the global_overrides section, these drives will be used
-# if no other drives are specified per device. These work in the same way as the per node
-# drives, so the same settings can be used.
+## Swift group variables are required only when using swift.
+## Below is a sample configuration.
+##
+## The part_power value is required at the swift level and cannot be changed once the ring has been built without removing the rings manually and rerunning the ring_builder.
+##
+## The weight value is not required and will default to 100 if not specified. This value applies to all drives that are set up, but can be overridden on a drive or node basis by setting this value in the node or drive config.
+##
+## The min_part_hours and repl_number values are not required and will default to "1" and "3" respectively. Setting these at the swift level applies the value as a default for all rings (including account/container). They can be overridden on a per-ring basis by adjusting the value for account/container or for a specific storage_policy.
+##
+## If you are using a storage_network, specify the interface that the storage_network is set up on. If this value isn't specified, the swift services will listen on the default management IP. NB: if the storage_network isn't set but per-host storage_ip values are set (or the storage_ip is not on the storage_network interface), the proxy server will not be able to connect to the storage services, as this directly changes the IP address the storage hosts are listening on.
+##
+## If you are using a dedicated replication network, specify the interface that the replication network is set up on. If this value isn't specified, no dedicated replication_network will be set. As with the storage_network, this affects the IP that the replication service listens on; if the repl_ip isn't set on that interface, replication will not work properly.
+##
+## Set the default drives per host. This is useful when all hosts have the exact same drives. It can be overridden on a per-host basis.
+##
+## Set the default mount_point, which is the location where your swift drives are mounted. For example, with a mount point of /mnt and a drive of sdc, there should be a drive mounted at /mnt/sdc on the swift_host. This can be overridden on a per-host basis if required.
+##
+## For account and container rings, min_part_hours and repl_number are the only values that can be set. Setting them here will override the defaults for the specific ring.
+##
+## Specify your storage_policies. There must be at least one storage policy, and at least one storage policy with an index of 0 for legacy containers created before storage policies were instituted. At least one storage policy must have "default: True" set. The options that can be set for storage_policies are name (str), index (int), default (bool), deprecated (bool), repl_number (int) and min_part_hours (int), with the last two overriding the default if specified.
+##
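
The new comments describe a cascade: min_part_hours and repl_number set at the swift level act as defaults that individual rings can override. A minimal sketch of that cascade in YAML (the account override value and the bare-bones layout are illustrative, not part of this commit):

global_overrides:
  swift:
    part_power: 8        # cannot be changed once the rings are built
    repl_number: 3       # default replica count for every ring
    min_part_hours: 1    # default min_part_hours for every ring
    account:
      min_part_hours: 4  # hypothetical override: applies to the account ring only
    container: {}        # no overrides, so it inherits the swift-level defaults
    storage_policies:
      - policy:
          name: gold
          index: 0       # a policy with index 0 is required for legacy containers
          default: True  # at least one policy must be the default
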
 # global_overrides:
 #   swift:
 #     part_power: 8
-#     account:
-#       repl_number: 3
-#       min_part_hours: 1
-#     container:
-#       repl_number: 3
-#     storage_policies:
-#       - policy:
-#           name: gold
-#           index: 0
-#           repl_number: 3
-#           default: yes
-#       - policy:
-#           name: silver
-#           index: 1
-#           repl_number: 2
-#           deprecated: yes
-#     mount_point: /mnt
-#     drives:
-#       - name: sdb
-#       - name: sdc
+#     weight: 100
+#     min_part_hours: 1
+#     repl_number: 3
+#     storage_network: 'br-storage'
+#     replication_network: 'br-repl'
+#     drives:
+#       - name: sdc
+#       - name: sdd
+#       - name: sde
+#       - name: sdf
+#     mount_point: /mnt
+#     account:
+#     container:
+#     storage_policies:
+#       - policy:
+#           name: gold
+#           index: 0
+#           default: True
+#       - policy:
+#           name: silver
+#           index: 1
+#           repl_number: 3
+#           deprecated: True
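
The drives and mount_point defined under global_overrides above are only defaults; per the comments, a host with different hardware can override them under its own swift_vars. A short sketch of that per-host override (the host name, address and drive names are hypothetical):

swift_hosts:
  swift-node9:
    ip: 192.0.2.20               # hypothetical storage host
    container_vars:
      swift_vars:
        mount_point: /srv/node   # overrides the global /mnt default for this host only
        drives:                  # overrides the global drive list for this host only
          - name: vdb
          - name: vdc
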
-# User defined Swift Proxy hosts - not required when not using swift
-# Will deploy a swift-proxy container on these hosts.
-# Recommend mirroring the infra_hosts
+## Specify the swift-proxy_hosts - these will typically be your infra nodes and are where your swift_proxy containers will be created.
+## All that is required is the IP address of the host that ansible will connect to.
 # swift-proxy_hosts:
-#   infra1:
-#     ip: 172.29.236.100
-#   infra2:
-#     ip: 172.29.236.101
-#   infra3:
-#     ip: 172.29.236.102
+#   infra-node1:
+#     ip: 192.0.2.1
+#   infra-node2:
+#     ip: 192.0.2.2
+#   infra-node3:
+#     ip: 192.0.2.3
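
The old comment recommended mirroring the infra_hosts, and the new comment keeps that intent ("these will typically be your infra nodes"). A sketch of what that mirroring looks like, assuming the infra_hosts group elsewhere in rpc_user_config.yml lists the same three nodes (reusing the documentation addresses above):

infra_hosts:
  infra-node1:
    ip: 192.0.2.1
  infra-node2:
    ip: 192.0.2.2
  infra-node3:
    ip: 192.0.2.3

swift-proxy_hosts:           # same nodes, so each infra node also gets a swift_proxy container
  infra-node1:
    ip: 192.0.2.1
  infra-node2:
    ip: 192.0.2.2
  infra-node3:
    ip: 192.0.2.3
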
-# User defined Object Storage Hosts - this is not a required group
-# Under swift_vars you can specify the host specific swift_vars.
-# region - the swift region, this isn't required.
-# zone - the swift zone, this isn't required either, will default to 0
-# mount_point - where the drives are mounted on the server
-# drives - A list of drives in the server (Must have a name as a minimum)
-# Above 4 vars are "host specific"
-# weight: a disks weight (defaults to 100 if not specified)
-# repl_ip: IP specific for object replication (not required)
-# repl_port: Port specific for object replication (not required)
-# groups: A list of groups to add the drive to. A group is either a storage policy or the account or container servers. (If not specified defaults to all groups, so container/account/all storage policies).
-# The above 4 can be specified on a per host or per drive basis
-# Or both, in which case "per drive" will take precedence for the specific drive.
-# ip can be specified in swift_vars to override the hosts ip
-# or per drive to override all for that specific drive.
+## Specify the swift_hosts which will be the swift storage nodes.
+##
+## The ip is the address of the host that ansible will connect to.
+##
+## All swift settings are set under swift_vars.
+##
+## The storage_ip and repl_ip represent the IPs that will go in the ring for storage and replication.
+## E.g. for swift-node1 the IP string added to the ring would be 198.51.100.4:(service_port)R203.0.113.4:(service_port)
+## If only the storage_ip is specified then the repl_ip will default to the storage_ip.
+## If only the repl_ip is specified then the storage_ip will default to the host ip above.
+## If neither is specified both will default to the host ip above.
+##
+## zone and region can be specified for swift when building the ring.
+##
+## groups can be set to list which rings a host's drives should belong to. This can be set on a per-drive basis, which will override the host setting.
+##
+## swift-node5 is an example of overriding the values: the groups are set for the host and overridden on drive sdb, the weight is overridden for the host and specifically adjusted on drive sdb, and the storage/repl IPs are different for sdb.
+##
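
The storage_ip/repl_ip defaulting rules above can be summarised with three hypothetical hosts (names and addresses are illustrative; the comment on each host shows the storage/replication string that would land in the ring):

swift_hosts:
  node-a:                            # neither storage_ip nor repl_ip set
    ip: 192.0.2.10                   # ring entry: 192.0.2.10:(service_port)R192.0.2.10:(service_port)
  node-b:                            # only storage_ip set, so repl_ip defaults to it
    ip: 192.0.2.11
    container_vars:
      swift_vars:
        storage_ip: 198.51.100.11    # ring entry: 198.51.100.11:(service_port)R198.51.100.11:(service_port)
  node-c:                            # both set, storage and replication use different addresses
    ip: 192.0.2.12
    container_vars:
      swift_vars:
        storage_ip: 198.51.100.12
        repl_ip: 203.0.113.12        # ring entry: 198.51.100.12:(service_port)R203.0.113.12:(service_port)
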
 # swift_hosts:
-#   object_storage1:
-#     ip: 172.29.236.108
-#     container_vars:
-#       swift_vars:
-#         region: 0
-#         zone: 0
-#         groups:
-#           - silver
-#           - account
-#         mount_point: /srv/node
-#         drives:
-#           - name: sdb
-#             ip: 172.10.100.100
-#             repl_ip: 10.10.0.1
-#             repl_port: 54321
-#             groups:
-#               - gold
-#               - account
-#               - container
-#           - name: sdc
-#             weight: 150
-#           - name: sdd
-#           - name: sde
-#
-#   object_storage2:
-#     ip: 172.29.236.109
-#     container_vars:
-#       swift_vars:
-#         region: 0
-#         zone: 1
-#         mount_point: /srv/node
-#         drives:
-#           - name: sdb
-#           - name: sdc
+#   swift-node1:
+#     ip: 192.0.2.4
+#     container_vars:
+#       swift_vars:
+#         storage_ip: 198.51.100.4
+#         repl_ip: 203.0.113.4
+#         zone: 0
+#   swift-node2:
+#     ip: 192.0.2.5
+#     container_vars:
+#       swift_vars:
+#         storage_ip: 198.51.100.5
+#         repl_ip: 203.0.113.5
+#         zone: 1
+#   swift-node3:
+#     ip: 192.0.2.6
+#     container_vars:
+#       swift_vars:
+#         storage_ip: 198.51.100.6
+#         repl_ip: 203.0.113.6
+#         zone: 2
+#   swift-node4:
+#     ip: 192.0.2.7
+#     container_vars:
+#       swift_vars:
+#         storage_ip: 198.51.100.7
+#         repl_ip: 203.0.113.7
+#         zone: 3
+#   swift-node5:
+#     ip: 192.0.2.8
+#     container_vars:
+#       swift_vars:
+#         storage_ip: 198.51.100.8
+#         repl_ip: 203.0.113.8
+#         zone: 4
+#         region: 3
+#         weight: 200
+#         groups:
+#           - account
+#           - container
+#           - silver
+#         drives:
+#           - name: sdb
+#             storage_ip: 198.51.100.9
+#             repl_ip: 203.0.113.9
+#             weight: 75
+#             groups:
+#               - gold
+#           - name: sdc
+#           - name: sdd
+#           - name: sde
+#           - name: sdf
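
The zone and region keys shown above are the failure domains the ring builder uses when spreading replicas, so hosts can be grouped by rack or data centre. A small sketch of a two-region layout (host names, addresses and the region/zone numbering are hypothetical):

swift_hosts:
  dc1-node1:
    ip: 192.0.2.30
    container_vars:
      swift_vars:
        region: 1        # first data centre
        zone: 1
  dc1-node2:
    ip: 192.0.2.31
    container_vars:
      swift_vars:
        region: 1
        zone: 2          # different zone within the same region
  dc2-node1:
    ip: 192.0.2.32
    container_vars:
      swift_vars:
        region: 2        # second data centre
        zone: 1
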
rpc_user_config.yml

@@ -74,6 +74,8 @@ global_overrides:
           - cinder_api
           - cinder_volume
           - nova_compute
+          # If you are using the storage network for swift_proxy add it to the group_binds
+          # - swift_proxy
         type: "raw"
         container_bridge: "br-storage"
         container_interface: "eth2"
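
The two added lines above are comments reminding the deployer to bind swift_proxy to the storage network when the proxies should reach the storage nodes over br-storage. A sketch of the provider network entry with that bind uncommented (the provider_networks nesting is inferred from the hunk context, and the full list of other group_binds may differ in a real config):

global_overrides:
  provider_networks:
    - network:
        group_binds:
          - cinder_api
          - cinder_volume
          - nova_compute
          - swift_proxy              # uncommented so the proxy containers get an address on br-storage
        type: "raw"
        container_bridge: "br-storage"
        container_interface: "eth2"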