Background: The install guide commands use the /srv/node mount point for the /etc/fstab updates; however, the example configuration settings in swift.yml use /mnt. A new deployer, or one who does not follow the instructions closely, may use /mnt for the configuration settings and /srv/node for the /etc/fstab updates. This results in an unusable swift subsystem.

Change: Update the install guide docs and swift.yml.example to use /srv/node as the example mount point, consistent with the install guide's example /etc/fstab updates.

Closes-Bug: 1582043
Change-Id: Ic64d36c7481fb4fdb6122d8578a0a6cf45e6b978
OpenStack-Ansible Installation Guide
Configuring the service
Procedure 5.2. Updating the Object Storage configuration swift.yml file
Copy the /etc/openstack_deploy/conf.d/swift.yml.example file to /etc/openstack_deploy/conf.d/swift.yml:

# cp /etc/openstack_deploy/conf.d/swift.yml.example \
    /etc/openstack_deploy/conf.d/swift.yml

Update the global override values:
# global_overrides:
#   swift:
#     part_power: 8
#     weight: 100
#     min_part_hours: 1
#     repl_number: 3
#     storage_network: 'br-storage'
#     replication_network: 'br-repl'
#     drives:
#       - name: sdc
#       - name: sdd
#       - name: sde
#       - name: sdf
#     mount_point: /srv/node
#     account:
#     container:
#     storage_policies:
#       - policy:
#           name: gold
#           index: 0
#           default: True
#       - policy:
#           name: silver
#           index: 1
#           repl_number: 3
#           deprecated: True
#     statsd_host: statsd.example.com
#     statsd_port: 8125
#     statsd_metric_prefix:
#     statsd_default_sample_rate: 1.0
#     statsd_sample_rate_factor: 1.0

part_power-
Set the partition power value based on the total amount of storage the entire ring uses.
Multiply the maximum number of drives ever used in the swift installation by 100 and round that value up to the closest power of two. For example, a maximum of six drives, times 100, equals 600. The nearest power of two above 600 is 1024, which is two to the power of ten, so the partition power is ten. The partition power cannot be changed after the swift rings are built.
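For illustration only, a sizing fragment for the hypothetical six-drive example above could look like the following; substitute your own maximum drive count:

# Hypothetical sizing: the cluster will never exceed six drives.
# 6 drives x 100 = 600 partitions; the next power of two above 600 is 1024 = 2^10.
global_overrides:
  swift:
    part_power: 10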
weight-
The default weight is 100. If the drives are different sizes, set the weight value to avoid uneven distribution of data. For example, a 1 TB disk would have a weight of 100, while a 2 TB drive would have a weight of 200.
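As a sketch, drives of different sizes could be weighted per drive through the host-level overrides described later in this procedure (the host name, IP address, and drive sizes below are hypothetical):

swift_hosts:
  swift-node1:                # hypothetical host with drives of different sizes
    ip: 192.0.2.4
    container_vars:
      swift_vars:
        drives:
          - name: sdc         # assumed 1 TB drive
            weight: 100
          - name: sdd         # assumed 2 TB drive
            weight: 200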
min_part_hours-
The default value is 1. Set the minimum partition hours to the amount of time to lock a partition's replicas after moving a partition. Moving multiple replicas at the same time makes data inaccessible. This value can be set separately in the swift, container, account, and policy sections with the value in lower sections superseding the value in the swift section.
repl_number-
The default value is 3. Set the replication number to the number of replicas of each object. This value can be set separately in the swift, container, account, and policy sections with the value in the more granular sections superseding the value in the swift section.
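As a minimal sketch, the swift-level defaults could be overridden in the more granular sections as follows (the override values shown are illustrative only):

global_overrides:
  swift:
    min_part_hours: 1
    repl_number: 3
    account:
      repl_number: 3          # illustrative override for the account ring
    container:
      min_part_hours: 2       # illustrative override for the container ring
    storage_policies:
      - policy:
          name: silver
          index: 1
          repl_number: 2      # illustrative override for this policy only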
storage_network-
By default, the swift services listen on the default management IP. Optionally, specify the interface of the storage network.
If the storage_network is not set, but the storage_ips per host are set (or the storage_ip is not on the storage_network interface), the proxy server is unable to connect to the storage services.

replication_network-
Optionally, specify a dedicated replication network interface so dedicated replication can be set up. If this value is not specified, no dedicated replication_network is set.

Replication does not work properly if the repl_ip is not set on the replication_network interface.
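As a rough sketch of how these settings relate to the per-host addresses described later in this procedure (the host name and IP addresses below are hypothetical):

global_overrides:
  swift:
    storage_network: 'br-storage'
    replication_network: 'br-repl'

swift_hosts:
  swift-node1:                     # hypothetical host
    ip: 192.0.2.4
    container_vars:
      swift_vars:
        storage_ip: 198.51.100.4   # assumed address on the br-storage interface
        repl_ip: 203.0.113.4       # assumed address on the br-repl interface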
drives-
Set the default drives per host. This is useful when all hosts have the same drives. These can be overridden on a per-host basis.
mount_point-
Set the mount_point value to the location where the swift drives are mounted. For example, with a mount point of /srv/node and a drive of sdc, a drive is mounted at /srv/node/sdc on the swift_host. This can be overridden on a per-host basis.
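For example, a fragment consistent with the /etc/fstab examples elsewhere in this guide might look like the following (the drive names are illustrative):

global_overrides:
  swift:
    mount_point: /srv/node
    drives:
      - name: sdc     # expected to be mounted at /srv/node/sdc, matching /etc/fstab
      - name: sdd     # expected to be mounted at /srv/node/sdd, matching /etc/fstab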
storage_policies-
Storage policies determine on which hardware data is stored, how the data is stored across that hardware, and in which region the data resides. Each storage policy must have a unique name and a unique index. There must be a storage policy with an index of 0 in the swift.yml file to use any legacy containers created before storage policies were instituted.
default-
Set the default value to yes for at least one policy. This is the default storage policy for any non-legacy containers that are created.
deprecated-
Set the deprecated value to yes to turn off storage policies.

For account and container rings, min_part_hours and repl_number are the only values that can be set. Setting them in this section overrides the defaults for the specific ring.

statsd_host-
Swift supports sending extra metrics to a statsd host. This option sets the statsd host that receives statsd metrics. Specifying this here applies to all hosts in the cluster.

If statsd_host is left blank or omitted, then statsd metrics are disabled.

All statsd settings can be overridden, or you can specify them deeper in the structure if you want to only catch statsd metrics on certain hosts.

statsd_port-
Optionally, use this to specify the statsd server's port to which you are sending metrics. Defaults to 8125 if omitted.

statsd_default_sample_rate and statsd_sample_rate_factor-
These statsd-related options are more complex and are used to tune how many samples are sent to statsd. Omit them unless you need to tweak these settings; if so, first read: http://docs.openstack.org/developer/swift/admin_guide.html
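As a sketch only, statsd reporting could be enabled globally and given a per-host prefix as follows (the statsd endpoint, host name, and prefix are hypothetical):

global_overrides:
  swift:
    statsd_host: statsd.example.com     # leave blank or omit to disable statsd metrics
    statsd_port: 8125                   # default port if omitted
    statsd_default_sample_rate: 1.0
    statsd_sample_rate_factor: 1.0

swift_hosts:
  swift-node1:                          # hypothetical host
    ip: 192.0.2.4
    container_vars:
      swift_vars:
        statsd_metric_prefix: node1     # prefix used only when statsd_host is defined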
Update the swift proxy hosts values:
# swift-proxy_hosts:
#   infra-node1:
#     ip: 192.0.2.1
#     statsd_metric_prefix: proxy01
#   infra-node2:
#     ip: 192.0.2.2
#     statsd_metric_prefix: proxy02
#   infra-node3:
#     ip: 192.0.2.3
#     statsd_metric_prefix: proxy03

swift-proxy_hosts-
Set the IP address of the hosts to which Ansible connects in order to deploy the swift-proxy containers. The swift-proxy_hosts value matches the infra nodes.
statsd_metric_prefix-
This setting is optional, and is only evaluated if you have defined statsd_host somewhere. It allows you to define a prefix to add to all statsd metrics sent from this host. If omitted, the node name is used.
Update the swift hosts values:
# swift_hosts:
#   swift-node1:
#     ip: 192.0.2.4
#     container_vars:
#       swift_vars:
#         zone: 0
#         statsd_metric_prefix: node1
#   swift-node2:
#     ip: 192.0.2.5
#     container_vars:
#       swift_vars:
#         zone: 1
#         statsd_metric_prefix: node2
#   swift-node3:
#     ip: 192.0.2.6
#     container_vars:
#       swift_vars:
#         zone: 2
#         statsd_metric_prefix: node3
#   swift-node4:
#     ip: 192.0.2.7
#     container_vars:
#       swift_vars:
#         zone: 3
#   swift-node5:
#     ip: 192.0.2.8
#     container_vars:
#       swift_vars:
#         storage_ip: 198.51.100.8
#         repl_ip: 203.0.113.8
#         zone: 4
#         region: 3
#         weight: 200
#         groups:
#           - account
#           - container
#           - silver
#         drives:
#           - name: sdb
#             storage_ip: 198.51.100.9
#             repl_ip: 203.0.113.9
#             weight: 75
#             groups:
#               - gold
#           - name: sdc
#           - name: sdd
#           - name: sde
#           - name: sdf

swift_hosts-
Specify the hosts to be used as the storage nodes. The ip is the address of the host to which Ansible connects. Set the name and IP address of each swift host. The swift_hosts section is not required.

swift_vars-
Contains the swift host-specific values.
storage_ip and repl_ip-
Base these values on the IP addresses of the host's storage_network or replication_network. For example, if the storage_network is br-storage and host1 has an IP address of 1.1.1.1 on br-storage, then that is the IP address in use for storage_ip. If only the storage_ip is specified, then the repl_ip defaults to the storage_ip. If neither is specified, both default to the host IP address.

Overriding these values on a host or drive basis can cause problems if the IP address that the service listens on is based on a specified storage_network or replication_network and the ring is set to a different IP address.

zone-
The default is 0. Optionally, set the swift zone for the ring.
region-
Optionally, set the swift region for the ring.
weight-
The default weight is 100. If the drives are different sizes, set the weight value to avoid uneven distribution of data. This value can be specified on a host or drive basis (if specified at both, the drive setting takes precedence).
groups-
Set the groups to list the rings to which a host's drive belongs. This can be set on a per-drive basis, which overrides the host setting.
drives-
Set the names of the drives on the swift host. Specify at least one name.
statsd_metric_prefix-
This setting is optional, and is only evaluated if statsd_host is defined somewhere. It allows you to define a prefix to add to all statsd metrics sent from this host. If omitted, the node name is used.

In the following example, swift-node5 shows values in the swift_hosts section that override the global values. Groups are set, which overrides the global settings for drive sdb. The weight is overridden for the host and specifically adjusted on drive sdb. Also, the storage_ip and repl_ip are set differently for sdb.

# swift-node5:
#   ip: 192.0.2.8
#   container_vars:
#     swift_vars:
#       storage_ip: 198.51.100.8
#       repl_ip: 203.0.113.8
#       zone: 4
#       region: 3
#       weight: 200
#       groups:
#         - account
#         - container
#         - silver
#       drives:
#         - name: sdb
#           storage_ip: 198.51.100.9
#           repl_ip: 203.0.113.9
#           weight: 75
#           groups:
#             - gold
#         - name: sdc
#         - name: sdd
#         - name: sde
#         - name: sdf
Ensure the swift.yml file is in the /etc/openstack_deploy/conf.d/ folder.