Support for Ceph and Swift storage networks, and improvements to Swift
In a deployment that has both Ceph and Swift deployed it can be useful to
separate the network traffic. This change adds support for dedicated storage
networks for both Ceph and Swift. By default, the storage hosts are attached
to the following networks:

* Overcloud admin network
* Internal network
* Storage network
* Storage management network

This change adds four additional networks, which can be used to separate the
storage network traffic as follows:

* Ceph storage network (ceph_storage_net_name) is used to carry Ceph storage
  data traffic. Defaults to the storage network (storage_net_name).
* Ceph storage management network (ceph_storage_mgmt_net_name) is used to
  carry Ceph storage management traffic. Defaults to the storage management
  network (storage_mgmt_net_name).
* Swift storage network (swift_storage_net_name) is used to carry Swift
  storage data traffic. Defaults to the storage network (storage_net_name).
* Swift storage replication network (swift_storage_replication_net_name) is
  used to carry Swift storage replication traffic. Defaults to the storage
  management network (storage_mgmt_net_name).

This change also includes several improvements to Swift device management and
ring generation. Device management and ring generation are now separate, with
device management occurring during 'kayobe overcloud host configure', and
ring generation during a new command, 'kayobe overcloud swift rings
generate'.

For device management, we now use standard Ansible modules rather than
commands for device preparation. File system labels can be configured for
each device individually.

For ring generation, all commands are run on a single host, by default a host
in the Swift storage group. A Python script runs in one of the kolla Swift
containers, which consumes an autogenerated YAML config file that defines the
layout of the rings.

Change-Id: Iedc7535532d706f02d710de69b422abf2f6fe54c
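For illustration, dedicated networks could be mapped in kayobe configuration
along these lines. This is a sketch only: the network names and CIDR values
below are hypothetical and not part of this change.

    # etc/kayobe/networks.yml
    ceph_storage_net_name: ceph_storage_net
    ceph_storage_mgmt_net_name: ceph_storage_mgmt_net
    swift_storage_net_name: swift_storage_net
    swift_storage_replication_net_name: swift_repl_net

    # Each named network is then defined via the usual kayobe
    # <network>_cidr attributes.
    ceph_storage_net_cidr: 10.10.0.0/24
    ceph_storage_mgmt_net_cidr: 10.11.0.0/24
    swift_storage_net_cidr: 10.12.0.0/24
    swift_repl_net_cidr: 10.13.0.0/24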
Parent: d27895d4ba
Commit: 6496cfc0ba
ansible/group_vars/all/ceph (new file)
@@ -0,0 +1,7 @@
+---
+###############################################################################
+# OpenStack Ceph configuration.
+
+# Ansible host pattern matching hosts on which Ceph storage services
+# are deployed. The default is to use hosts in the 'storage' group.
+ceph_hosts: "storage"
@@ -19,8 +19,9 @@ compute_default_network_interfaces: >
   {{ ([admin_oc_net_name,
        internal_net_name,
        storage_net_name,
+       ceph_storage_net_name,
        tunnel_net_name] +
-      (external_net_names if kolla_enable_neutron_provider_networks | bool else [])) | unique | list }}
+      (external_net_names if kolla_enable_neutron_provider_networks | bool else [])) | reject('none') | unique | list }}
 
 # List of extra networks to which compute nodes are attached.
 compute_extra_network_interfaces: []
@@ -25,7 +25,9 @@ controller_default_network_interfaces: >
        internal_net_name,
        storage_net_name,
        storage_mgmt_net_name,
-       cleaning_net_name] | unique | list }}
+       ceph_storage_net_name,
+       swift_storage_net_name,
+       cleaning_net_name] | reject('none') | unique | list }}
 
 # List of extra networks to which controller nodes are attached.
 controller_extra_network_interfaces: []
@@ -49,6 +49,18 @@ storage_net_name: 'storage_net'
 # Name of the network used to carry storage management traffic.
 storage_mgmt_net_name: 'storage_mgmt_net'
 
+# Name of the network used to carry ceph storage data traffic.
+ceph_storage_net_name: "{{ storage_net_name }}"
+
+# Name of the network used to carry ceph storage management traffic.
+ceph_storage_mgmt_net_name: "{{ storage_mgmt_net_name }}"
+
+# Name of the network used to carry swift storage data traffic.
+swift_storage_net_name: "{{ storage_net_name }}"
+
+# Name of the network used to carry swift storage replication traffic.
+swift_storage_replication_net_name: "{{ storage_mgmt_net_name }}"
+
 # Name of the network used to perform hardware introspection on the bare metal
 # workload hosts.
 inspection_net_name: 'inspection_net'
@@ -12,7 +12,15 @@ storage_bootstrap_user: "{{ lookup('env', 'USER') }}"
 # List of networks to which storage nodes are attached.
 storage_network_interfaces: >
   {{ (storage_default_network_interfaces +
-      storage_extra_network_interfaces) | unique | list }}
+      storage_extra_network_interfaces +
+      ([ceph_storage_net_name]
+       if storage_needs_ceph_network else []) +
+      ([ceph_storage_mgmt_net_name]
+       if storage_needs_ceph_mgmt_network else []) +
+      ([swift_storage_net_name]
+       if storage_needs_swift_network else []) +
+      ([swift_storage_replication_net_name]
+       if storage_needs_swift_replication_network else [])) | reject('none') | unique | list }}
 
 # List of default networks to which storage nodes are attached.
 storage_default_network_interfaces: >
@@ -24,6 +32,24 @@ storage_default_network_interfaces: >
 # List of extra networks to which storage nodes are attached.
 storage_extra_network_interfaces: []
 
+# Whether this host requires access to Ceph networks.
+storage_needs_ceph_network: >-
+  {{ kolla_enable_ceph | bool and
+     inventory_hostname in query('inventory_hostnames', ceph_hosts) }}
+
+storage_needs_ceph_mgmt_network: >-
+  {{ kolla_enable_ceph | bool and
+     inventory_hostname in query('inventory_hostnames', ceph_hosts) }}
+
+# Whether this host requires access to Swift networks.
+storage_needs_swift_network: >-
+  {{ kolla_enable_swift | bool and
+     inventory_hostname in query('inventory_hostnames', swift_hosts) }}
+
+storage_needs_swift_replication_network: >-
+  {{ kolla_enable_swift | bool and
+     inventory_hostname in query('inventory_hostnames', swift_hosts) }}
+
 ###############################################################################
 # Storage node BIOS configuration.
 
ansible/group_vars/all/swift (new file)
@@ -0,0 +1,67 @@
+---
+###############################################################################
+# OpenStack Swift configuration.
+
+# Short name of the kolla container image used to build rings. Default is the
+# swift-object image.
+swift_ring_build_image_name: swift-object
+
+# Full name of the kolla container image used to build rings.
+swift_ring_build_image: "{{ kolla_docker_registry ~ '/' if kolla_docker_registry else '' }}{{ kolla_docker_namespace }}/{{ kolla_base_distro }}-{{ kolla_install_type }}-{{ swift_ring_build_image_name }}:{{ kolla_openstack_release }}"
+
+# Ansible host pattern matching hosts on which Swift object storage services
+# are deployed. The default is to use hosts in the 'storage' group.
+swift_hosts: "storage"
+
+# Name of the host used to build Swift rings. Default is the first host of
+# 'swift_hosts'.
+swift_ring_build_host: "{{ query('inventory_hostnames', swift_hosts)[0] }}"
+
+# ID of the Swift region for this host. Default is 1.
+swift_region: 1
+
+# ID of the Swift zone. This can be set to different values for different hosts
+# to place them in different zones. Default is 0.
+swift_zone: 0
+
+# Base-2 logarithm of the number of partitions.
+# i.e. num_partitions=2^<swift_part_power>. Default is 10.
+swift_part_power: 10
+
+# Object replication count. Default is the smaller of the number of Swift
+# hosts, or 3.
+swift_replication_count: "{{ [query('inventory_hostnames', swift_hosts) | length, 3] | min }}"
+
+# Minimum time in hours between moving a given partition. Default is 1.
+swift_min_part_hours: 1
+
+# Ports on which Swift services listen. Default is:
+# object: 6000
+# account: 6001
+# container: 6002
+swift_service_ports:
+  object: 6000
+  account: 6001
+  container: 6002
+
+# List of block devices to use for Swift. Each item is a dict with the
+# following items:
+# - 'device': Block device path. Required.
+# - 'fs_label': Name of the label used to create the file system on the device.
+#   Optional. Default is to use the basename of the device.
+# - 'services': List of services that will use this block device. Optional.
+#   Default is 'swift_block_device_default_services'. Allowed items are
+#   'account', 'container', and 'object'.
+# - 'weight': Weight of the block device. Optional. Default is
+#   'swift_block_device_default_weight'.
+swift_block_devices: []
+
+# Default weight to assign to block devices in the ring. Default is 100.
+swift_block_device_default_weight: 100
+
+# Default list of services to assign block devices to. Allowed items are
+# 'account', 'container', and 'object'. Default value is all of these.
+swift_block_device_default_services:
+  - account
+  - container
+  - object
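For illustration, a host might configure its Swift devices as follows; the
device paths and label are hypothetical:

    swift_block_devices:
      # Uses all defaults: label 'sdb', all services, weight 100.
      - device: /dev/sdb
      # Dedicated to the object service, with a custom label and weight.
      - device: /dev/sdc
        fs_label: swift1
        services:
          - object
        weight: 150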
@@ -1,16 +0,0 @@
----
-###############################################################################
-# OpenStack Swift configuration.
-
-# Base-2 logarithm of the number of partitions.
-# i.e. num_partitions=2^<swift_part_power>.
-swift_part_power: 10
-
-# Object replication count.
-swift_replication_count: "{{ [groups['controllers'] | length, 3] | min }}"
-
-# Minimum time in hours between moving a given partition.
-swift_min_part_hours: 1
-
-# Number of Swift Zones.
-swift_num_zones: 5
@@ -28,6 +28,26 @@
     kolla_cluster_interface: "{{ storage_mgmt_net_name | net_interface | replace('-', '_') }}"
   when: storage_mgmt_net_name in network_interfaces
 
+- name: Set Ceph storage network interface
+  set_fact:
+    kolla_ceph_storage_interface: "{{ ceph_storage_net_name | net_interface | replace('-', '_') }}"
+  when: ceph_storage_net_name in network_interfaces
+
+- name: Set Ceph cluster network interface
+  set_fact:
+    kolla_ceph_cluster_interface: "{{ ceph_storage_mgmt_net_name | net_interface | replace('-', '_') }}"
+  when: ceph_storage_mgmt_net_name in network_interfaces
+
+- name: Set Swift storage network interface
+  set_fact:
+    kolla_swift_storage_interface: "{{ swift_storage_net_name | net_interface | replace('-', '_') }}"
+  when: swift_storage_net_name in network_interfaces
+
+- name: Set Swift replication network interface
+  set_fact:
+    kolla_swift_replication_interface: "{{ swift_storage_replication_net_name | net_interface | replace('-', '_') }}"
+  when: swift_storage_replication_net_name in network_interfaces
+
 - name: Set provision network interface
   set_fact:
     kolla_provision_interface: "{{ provision_wl_net_name | net_interface | replace('-', '_') }}"
@@ -109,6 +109,10 @@ kolla_overcloud_inventory_pass_through_host_vars:
   - "kolla_api_interface"
   - "kolla_storage_interface"
   - "kolla_cluster_interface"
+  - "kolla_ceph_storage_interface"
+  - "kolla_ceph_cluster_interface"
+  - "kolla_swift_storage_interface"
+  - "kolla_swift_replication_interface"
   - "kolla_provision_interface"
   - "kolla_inspector_dnsmasq_interface"
   - "kolla_dns_interface"
@@ -126,6 +130,10 @@ kolla_overcloud_inventory_pass_through_host_vars_map:
   kolla_api_interface: "api_interface"
   kolla_storage_interface: "storage_interface"
   kolla_cluster_interface: "cluster_interface"
+  kolla_ceph_storage_interface: "ceph_storage_interface"
+  kolla_ceph_cluster_interface: "ceph_cluster_interface"
+  kolla_swift_storage_interface: "swift_storage_interface"
+  kolla_swift_replication_interface: "swift_replication_interface"
   kolla_provision_interface: "provision_interface"
   kolla_inspector_dnsmasq_interface: "ironic_dnsmasq_interface"
   kolla_dns_interface: "dns_interface"
@@ -26,6 +26,10 @@
       kolla_provision_interface: "eth8"
       kolla_inspector_dnsmasq_interface: "eth9"
       kolla_tunnel_interface: "eth10"
+      kolla_ceph_storage_interface: "eth11"
+      kolla_ceph_cluster_interface: "eth12"
+      kolla_swift_storage_interface: "eth13"
+      kolla_swift_replication_interface: "eth14"
 
 - name: Add a compute host to the inventory
   add_host:
@@ -38,6 +42,7 @@
       kolla_neutron_external_interfaces: "eth4,eth5"
       kolla_neutron_bridge_names: "br0,br1"
       kolla_tunnel_interface: "eth6"
+      kolla_ceph_storage_interface: "eth7"
 
 - name: Create a temporary directory
   tempfile:
@@ -321,6 +326,10 @@
             - kolla_external_vip_interface
             - storage_interface
            - cluster_interface
+            - ceph_storage_interface
+            - ceph_cluster_interface
+            - swift_storage_interface
+            - swift_replication_interface
             - provision_interface
             - ironic_dnsmasq_interface
             - dns_interface
@@ -456,6 +465,10 @@
             api_interface: "eth2"
             storage_interface: "eth3"
             cluster_interface: "eth4"
+            ceph_storage_interface: "eth11"
+            ceph_cluster_interface: "eth12"
+            swift_storage_interface: "eth13"
+            swift_replication_interface: "eth14"
             provision_interface: "eth8"
             ironic_dnsmasq_interface: "eth9"
             dns_interface: "eth5"
@@ -469,6 +482,7 @@
             network_interface: "eth0"
             api_interface: "eth2"
             storage_interface: "eth3"
+            ceph_storage_interface: "eth7"
             tunnel_interface: "eth6"
             neutron_external_interface: "eth4,eth5"
             neutron_bridge_name: "br0,br1"
@@ -153,6 +153,14 @@ kolla_openstack_custom_config:
     dest: "{{ kolla_node_custom_config_path }}/swift"
     patterns: "*"
     enabled: "{{ kolla_enable_swift }}"
+    untemplated:
+      # These are binary files, and should not be templated.
+      - account.builder
+      - account.ring.gz
+      - container.builder
+      - container.ring.gz
+      - object.builder
+      - object.ring.gz
   # Zookeeper.
   - src: "{{ kolla_extra_config_path }}/zookeeper"
     dest: "{{ kolla_node_custom_config_path }}/zookeeper"
ansible/roles/swift-block-devices/defaults/main.yml (new file)
@@ -0,0 +1,11 @@
+---
+# Label used to create partitions. This is used by kolla-ansible to determine
+# which disks to mount.
+swift_block_devices_part_label: KOLLA_SWIFT_DATA
+
+# List of block devices to use for Swift. Each item is a dict with the
+# following items:
+# - 'device': Block device path. Required.
+# - 'fs_label': Name of the label used to create the file system on the device.
+#   Optional. Default is to use the basename of the device.
+swift_block_devices: []
ansible/roles/swift-block-devices/tasks/main.yml (new file)
@@ -0,0 +1,72 @@
+---
+- name: Fail if swift_block_devices is not in the expected format
+  fail:
+    msg: >-
+      Device {{ device_index }} in swift_block_devices is in an invalid format.
+      Items should be a dict, containing at least a 'device' field.
+  with_items: "{{ swift_block_devices }}"
+  when: item is not mapping or 'device' not in item
+  loop_control:
+    index_var: device_index
+
+- name: Ensure required packages are installed
+  package:
+    name: "{{ item }}"
+    state: installed
+  become: True
+  when: swift_block_devices | length > 0
+  with_items:
+    - parted
+    - xfsprogs
+
+- name: Check the presence of a partition on the Swift block devices
+  become: True
+  parted:
+    device: "{{ item.device }}"
+  with_items: "{{ swift_block_devices }}"
+  loop_control:
+    label: "{{ item.device }}"
+  register: swift_disk_info
+
+- name: Fail if the Swift block devices already have a partition
+  fail:
+    msg: >
+      The physical disk {{ item.item.device }} already has a partition.
+      Ensure that each disk in 'swift_block_devices' does not have any
+      partitions.
+  with_items: "{{ swift_disk_info.results }}"
+  when:
+    - item.partitions | length > 0
+    - item.partitions.0.name != swift_block_devices_part_label
+  loop_control:
+    label: "{{ item.item.device }}"
+
+- name: Ensure partitions exist for Swift block device
+  become: True
+  parted:
+    device: "{{ item.item.device }}"
+    number: 1
+    label: gpt
+    name: "{{ swift_block_devices_part_label }}"
+    state: present
+  with_items: "{{ swift_disk_info.results }}"
+  when: item.partitions | length == 0
+  loop_control:
+    label: "{{ item.item.device }}"
+
+- name: Ensure Swift XFS file systems exist
+  become: true
+  filesystem:
+    dev: "{{ partition_name }}"
+    force: true
+    fstype: xfs
+    opts: "-L {{ fs_label }}"
+  with_items: "{{ swift_disk_info.results }}"
+  when: item.partitions | length == 0
+  loop_control:
+    label: "{{ device }}"
+    index_var: index
+  vars:
+    device: "{{ item.item.device }}"
+    partition_name: "{{ device }}{% if device.startswith('/dev/loop') %}p{% endif %}1"
+    fs_label: "{{ item.item.fs_label | default(device | basename) }}"
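A minimal sketch of exercising this role standalone (the device path is
hypothetical; kayobe normally applies it via ansible/swift-block-devices.yml,
added later in this change):

    - hosts: storage
      roles:
        - role: swift-block-devices
          swift_block_devices:
            - device: /dev/sdb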
ansible/roles/swift-block-devices/tests/main.yml (new file)
@@ -0,0 +1,13 @@
+---
+- include: test-invalid-format.yml
+- include: test-mount.yml
+- include: test-bootstrapped.yml
+
+- hosts: localhost
+  connection: local
+  tasks:
+    - name: Fail if any tests failed
+      fail:
+        msg: >
+          Test failures: {{ test_failures }}
+      when: test_failures is defined
@@ -0,0 +1,68 @@
+---
+# Test case with one device that has already been partitioned.
+
+- hosts: localhost
+  connection: local
+  tasks:
+    - name: Allocate a temporary file for a fake device
+      tempfile:
+      register: tempfile
+
+    - name: Allocate a fake device file
+      command: fallocate -l 32M {{ tempfile.path }}
+
+    - name: Find a free loopback device
+      command: losetup -f
+      register: loopback
+      become: true
+
+    - name: Create a loopback device
+      command: losetup {{ loopback.stdout }} {{ tempfile.path }}
+      become: true
+
+    - name: Add a partition
+      become: True
+      parted:
+        device: "{{ loopback.stdout }}"
+        number: 1
+        label: gpt
+        name: KOLLA_SWIFT_DATA
+        state: present
+
+    - block:
+        - name: Test the swift-block-devices role
+          include_role:
+            name: ../../swift-block-devices
+          vars:
+            swift_block_devices:
+              - device: "{{ loopback.stdout }}"
+
+        - name: Get name of fake partition
+          parted:
+            device: "{{ loopback.stdout }}"
+          register: "disk_info"
+          become: True
+
+        - name: Validate number of partitions
+          assert:
+            that: disk_info.partitions | length == 1
+            msg: >
+              Number of partitions is not correct.
+
+        - name: Validate partition label is present
+          assert:
+            that: "disk_info.partitions.0.name == 'KOLLA_SWIFT_DATA'"
+            msg: >
+              Name of partition is not correct.
+
+      always:
+        - name: Remove the fake file
+          file:
+            name: "{{ loopback.stdout }}"
+            state: absent
+          become: true
+
+      rescue:
+        - name: Flag that a failure occurred
+          set_fact:
+            test_failures: "{{ test_failures | default(0) | int + 1 }}"
@@ -0,0 +1,23 @@
+---
+# Test case with swift_block_devices in an invalid format.
+
+- hosts: localhost
+  connection: local
+  tasks:
+    - block:
+        - name: Test the swift-block-devices role
+          include_role:
+            name: ../../swift-block-devices
+          vars:
+            swift_block_devices:
+              - /dev/fake
+
+      rescue:
+        - name: Flag that the error was raised
+          set_fact:
+            raised_error: true
+
+    - name: Flag that a failure occurred
+      set_fact:
+        test_failures: "{{ test_failures | default(0) | int + 1 }}"
+      when: raised_error is not defined
ansible/roles/swift-block-devices/tests/test-mount.yml (new file)
@@ -0,0 +1,60 @@
+---
+# Test case with one device that has not yet been tagged by kayobe with the
+# kolla-ansible bootstrap label.
+
+- hosts: localhost
+  connection: local
+  tasks:
+    - name: Allocate a temporary file for a fake device
+      tempfile:
+      register: tempfile
+
+    - name: Allocate a fake device file
+      command: fallocate -l 32M {{ tempfile.path }}
+
+    - name: Find a free loopback device
+      command: losetup -f
+      register: loopback
+      become: true
+
+    - name: Create a loopback device
+      command: losetup {{ loopback.stdout }} {{ tempfile.path }}
+      become: true
+
+    - block:
+        - name: Test the swift-block-devices role
+          include_role:
+            name: ../../swift-block-devices
+          vars:
+            swift_block_devices:
+              - device: "{{ loopback.stdout }}"
+
+        - name: Get name of fake partition
+          parted:
+            device: "{{ loopback.stdout }}"
+          register: "disk_info"
+          become: True
+
+        - name: Validate number of partitions
+          assert:
+            that: disk_info.partitions | length == 1
+            msg: >
+              Number of partitions is not correct.
+
+        - name: Validate partition label is present
+          assert:
+            that: "disk_info.partitions.0.name == 'KOLLA_SWIFT_DATA'"
+            msg: >
+              Name of partition is not correct.
+
+      always:
+        - name: Remove the fake file
+          file:
+            name: "{{ loopback.stdout }}"
+            state: absent
+          become: true
+
+      rescue:
+        - name: Flag that a failure occurred
+          set_fact:
+            test_failures: "{{ test_failures | default(0) | int + 1 }}"
ansible/roles/swift-rings/defaults/main.yml (new file)
@@ -0,0 +1,49 @@
+---
+# Host on which to build Swift rings.
+swift_ring_build_host:
+
+# Path to Kayobe configuration for Swift.
+swift_config_path:
+
+# Path in the container in which to build rings.
+swift_container_build_path: /tmp/swift-rings
+
+# Path on the build host in which to store ring files temporarily.
+swift_ring_build_path: /tmp/swift-rings
+
+# Docker image to use to build rings.
+swift_ring_build_image:
+
+# Base-2 logarithm of the number of partitions.
+# i.e. num_partitions=2^<swift_part_power>.
+swift_part_power:
+
+# Object replication count.
+swift_replication_count:
+
+# Minimum time in hours between moving a given partition.
+swift_min_part_hours:
+
+# List of configuration items for each host. Each item is a dict containing
+# the following fields:
+# - host: hostname
+# - region: Swift region
+# - zone: Swift zone
+# - ip: storage network IP address
+# - ports: dict of ports for the storage network
+# - replication_ip: replication network IP address
+# - replication_ports: dict of ports for the replication network
+# - block_devices: list of block devices to use for Swift. Each item is a
+#   dict with the following items:
+#   - 'device': Block device path. Required.
+#   - 'fs_label': Name of the label used to create the file system on the
+#     device. Optional. Default is to use the basename of the device.
+#   - 'services': List of services that will use this block device. Optional.
+#     Default is 'block_device_default_services' for this host.
+#   - 'weight': Weight of the block device. Optional. Default is
+#     'block_device_default_weight' for this host.
+# - 'block_device_default_services': default list of services to assign block
+#   devices on this host to.
+# - 'block_device_default_weight': default weight to assign to block devices
+#   on this host.
+swift_host_config: []
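A sketch of a single entry in this list; the hostname, addresses and device
are hypothetical:

    swift_host_config:
      - host: storage0
        region: 1
        zone: 0
        ip: 10.0.12.10
        ports:
          object: 6000
          account: 6001
          container: 6002
        replication_ip: 10.0.13.10
        replication_ports:
          object: 6000
          account: 6001
          container: 6002
        block_devices:
          - device: /dev/sdb
        block_device_default_services:
          - account
          - container
          - object
        block_device_default_weight: 100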
ansible/roles/swift-rings/files/swift-ring-builder.py (new file)
@@ -0,0 +1,130 @@
+#!/usr/bin/env python
+
+"""
+Script to build a Swift ring from a declarative YAML configuration. This has
+been built as a script to avoid repeated 'docker exec' commands, which could
+take a long time.
+
+Usage:
+
+    python swift-ring-builder.py <config file path> <build path> <service name>
+
+Example:
+
+    python swift-ring-builder.py /path/to/config.yml /path/to/builds object
+
+Example configuration format:
+
+    ---
+    part_power: 10
+    replication_count: 3
+    min_part_hours: 1
+    hosts:
+      - host: swift1
+        region: 1
+        zone: 1
+        ip: 10.0.0.1
+        port: 6001
+        replication_ip: 10.1.0.1
+        replication_port: 6001
+        devices:
+          - device: /dev/sdb
+            weight: 100
+          - device: /dev/sdc
+            weight: 100
+"""
+
+from __future__ import print_function
+import subprocess
+import sys
+
+import yaml
+
+
+class RingBuilder(object):
+    """Helper class for building Swift rings."""
+
+    def __init__(self, build_path, service_name):
+        self.build_path = build_path
+        self.service_name = service_name
+
+    def get_base_command(self):
+        return [
+            'swift-ring-builder',
+            '%s/%s.builder' % (self.build_path, self.service_name),
+        ]
+
+    def create(self, part_power, replication_count, min_part_hours):
+        cmd = self.get_base_command()
+        cmd += [
+            'create',
+            "{}".format(part_power),
+            "{}".format(replication_count),
+            "{}".format(min_part_hours),
+        ]
+        try:
+            subprocess.check_call(cmd)
+        except subprocess.CalledProcessError:
+            print("Failed to create %s ring" % self.service_name)
+            sys.exit(1)
+
+    def add_device(self, host, device):
+        cmd = self.get_base_command()
+        cmd += [
+            'add',
+            '--region', "{}".format(host['region']),
+            '--zone', "{}".format(host['zone']),
+            '--ip', host['ip'],
+            '--port', "{}".format(host['port']),
+            '--replication-ip', host['replication_ip'],
+            '--replication-port', "{}".format(host['replication_port']),
+            '--device', device['device'],
+            '--weight', "{}".format(device['weight']),
+        ]
+        try:
+            subprocess.check_call(cmd)
+        except subprocess.CalledProcessError:
+            print("Failed to add device %s on host %s to %s ring" %
+                  (device['device'], host['host'], self.service_name))
+            sys.exit(1)
+
+    def rebalance(self):
+        cmd = self.get_base_command()
+        cmd += [
+            'rebalance',
+        ]
+        try:
+            subprocess.check_call(cmd)
+        except subprocess.CalledProcessError:
+            print("Failed to rebalance %s ring" % self.service_name)
+            sys.exit(1)
+
+
+def build_rings(config, build_path, service_name):
+    builder = RingBuilder(build_path, service_name)
+    builder.create(config['part_power'], config['replication_count'],
+                   config['min_part_hours'])
+    for host in config['hosts']:
+        devices = host['devices']
+        # If no devices are present for this host, this will be None.
+        if devices is None:
+            continue
+        for device in devices:
+            builder.add_device(host, device)
+    builder.rebalance()
+
+
+def main():
+    if len(sys.argv) != 4:
+        raise Exception("Usage: {0} <config file path> <build path> "
+                        "<service name>".format(sys.argv[0]))
+    config_path = sys.argv[1]
+    build_path = sys.argv[2]
+    service_name = sys.argv[3]
+    with open(config_path) as f:
+        config = yaml.safe_load(f)
+    build_rings(config, build_path, service_name)
+
+
+if __name__ == "__main__":
+    main()
ansible/roles/swift-rings/tasks/main.yml (new file)
@@ -0,0 +1,69 @@
+---
+# We generate a configuration file and execute a python script in a container
+# that builds a ring based on the config file contents. Doing it this way
+# avoids a large task loop with a docker container for each step, which would
+# be quite slow.
+
+# Execute the following commands on the ring build host.
+- block:
+    # Facts required for ansible_user_uid and ansible_user_gid.
+    - name: Gather facts for swift ring build host
+      setup:
+
+    - name: Ensure Swift ring build directory exists
+      file:
+        path: "{{ swift_ring_build_path }}"
+        state: directory
+
+    - name: Ensure Swift ring builder script exists
+      copy:
+        src: swift-ring-builder.py
+        dest: "{{ swift_ring_build_path }}"
+
+    - name: Ensure Swift ring builder configuration exists
+      template:
+        src: swift-ring.yml.j2
+        dest: "{{ swift_ring_build_path }}/{{ service_name }}-ring.yml"
+      with_items: "{{ swift_service_names }}"
+      loop_control:
+        loop_var: service_name
+
+    - name: Ensure Swift rings exist
+      docker_container:
+        cleanup: true
+        command: >-
+          python {{ swift_container_build_path }}/swift-ring-builder.py
+          {{ swift_container_build_path }}/{{ item }}-ring.yml
+          {{ swift_container_build_path }}
+          {{ item }}
+        detach: false
+        image: "{{ swift_ring_build_image }}"
+        name: "swift_{{ item }}_ring_builder"
+        user: "{{ ansible_user_uid }}:{{ ansible_user_gid }}"
+        volumes:
+          - "{{ swift_ring_build_path }}/:{{ swift_container_build_path }}/"
+      with_items: "{{ swift_service_names }}"
+
+    - name: Ensure Swift ring files are copied
+      fetch:
+        src: "{{ swift_ring_build_path }}/{{ item[0] }}.{{ item[1] }}"
+        dest: "{{ swift_config_path }}/{{ item[0] }}.{{ item[1] }}"
+        flat: true
+        mode: 0644
+      with_nested:
+        - "{{ swift_service_names }}"
+        - - ring.gz
+          - builder
+      become: true
+
+  always:
+    - name: Remove Swift ring build directory from build host
+      file:
+        path: "{{ swift_ring_build_path }}"
+        state: absent
+
+  delegate_to: "{{ swift_ring_build_host }}"
+  vars:
+    # NOTE: Without this, the build host's ansible_host variable will not be
+    # respected when using delegate_to.
+    ansible_host: "{{ hostvars[swift_ring_build_host].ansible_host | default(swift_ring_build_host) }}"
ansible/roles/swift-rings/templates/swift-ring.yml.j2 (new file)
@@ -0,0 +1,21 @@
+---
+part_power: {{ swift_part_power }}
+replication_count: {{ swift_replication_count }}
+min_part_hours: {{ swift_min_part_hours }}
+hosts:
+{% for host_config in swift_host_config %}
+  - host: {{ host_config.host }}
+    region: {{ host_config.region }}
+    zone: {{ host_config.zone }}
+    ip: {{ host_config.ip }}
+    port: {{ host_config.ports[service_name] }}
+    replication_ip: {{ host_config.replication_ip }}
+    replication_port: {{ host_config.replication_ports[service_name] }}
+    devices:
+{% for device in host_config.block_devices %}
+{% if service_name in (device.services | default(host_config.block_device_default_services)) %}
+      - device: {{ device.fs_label | default(device.device | basename) }}
+        weight: {{ device.weight | default(host_config.block_device_default_weight) }}
+{% endif %}
+{% endfor %}
+{% endfor %}
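For reference, rendered for the 'object' service with a single hypothetical
host and one device, this template produces roughly:

    ---
    part_power: 10
    replication_count: 1
    min_part_hours: 1
    hosts:
      - host: storage0
        region: 1
        zone: 0
        ip: 10.0.12.10
        port: 6000
        replication_ip: 10.0.13.10
        replication_port: 6000
        devices:
          - device: sdb
            weight: 100

Note that each device is rendered as its file system label (here 'sdb', the
device basename), rather than the full device path.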
@@ -1,34 +0,0 @@
----
-# List of names of block devices to use for Swift.
-swift_block_devices: []
-
-# Docker image to use to build rings.
-swift_image:
-
-# Host on which to build rings.
-swift_ring_build_host:
-
-# Path in which to build ring files.
-swift_ring_build_path: /tmp/swift-rings
-
-# Ports on which Swift services listen.
-swift_service_ports:
-  object: 6000
-  account: 6001
-  container: 6002
-
-# Base-2 logarithm of the number of partitions.
-# i.e. num_partitions=2^<swift_part_power>.
-swift_part_power:
-
-# Object replication count.
-swift_replication_count:
-
-# Minimum time in hours between moving a given partition.
-swift_min_part_hours:
-
-# ID of the region for this Swift service.
-swift_region:
-
-# ID of the zone for this Swift service.
-swift_zone:
@@ -1,10 +0,0 @@
----
-- name: Ensure Swift partitions exist
-  command: parted /dev/{{ item }} -s -- mklabel gpt mkpart KOLLA_SWIFT_DATA 1 -1
-  with_items: "{{ swift_block_devices }}"
-  become: True
-
-- name: Ensure Swift XFS file systems exist
-  command: mkfs.xfs -f -L d{{ swift_block_devices.index(item) }} /dev/{{ item }}{% if item.startswith('loop') %}p{% endif %}1
-  with_items: "{{ swift_block_devices }}"
-  become: True
@@ -1,3 +0,0 @@
----
-- include_tasks: devices.yml
-- include_tasks: rings.yml
@@ -1,75 +0,0 @@
----
-- name: Ensure Swift ring build directory exists
-  file:
-    path: "{{ swift_ring_build_path }}"
-    state: directory
-  delegate_to: "{{ swift_ring_build_host }}"
-  run_once: True
-
-- name: Ensure Swift rings are created
-  command: >
-    docker run
-    --rm
-    -v {{ swift_ring_build_path }}/:{{ kolla_config_path }}/config/swift/
-    {{ swift_image }}
-    swift-ring-builder {{ kolla_config_path }}/config/swift/{{ item }}.builder create
-    {{ swift_part_power }}
-    {{ swift_replication_count }}
-    {{ swift_min_part_hours }}
-  with_items: "{{ swift_service_names }}"
-  delegate_to: "{{ swift_ring_build_host }}"
-  run_once: True
-
-- name: Ensure devices are added to Swift rings
-  command: >
-    docker run
-    --rm
-    -v {{ swift_ring_build_path }}/:{{ kolla_config_path }}/config/swift/
-    {{ swift_image }}
-    swift-ring-builder {{ kolla_config_path }}/config/swift/{{ item[0] }}.builder add
-    --region {{ swift_region }}
-    --zone {{ swift_zone }}
-    --ip {{ internal_net_name | net_ip }}
-    --port {{ swift_service_ports[item[0]] }}
-    --device {{ item[1] }}
-    --weight 100
-  with_nested:
-    - "{{ swift_service_names }}"
-    - "{{ swift_block_devices }}"
-  delegate_to: "{{ swift_ring_build_host }}"
-
-- name: Ensure Swift rings are rebalanced
-  command: >
-    docker run
-    --rm
-    -v {{ swift_ring_build_path }}/:{{ kolla_config_path }}/config/swift/
-    {{ swift_image }}
-    swift-ring-builder {{ kolla_config_path }}/config/swift/{{ item }}.builder rebalance
-  with_items: "{{ swift_service_names }}"
-  delegate_to: "{{ swift_ring_build_host }}"
-  run_once: True
-
-- name: Ensure Swift ring files are copied
-  local_action:
-    module: copy
-    src: "{{ swift_ring_build_path }}/{{ item[0] }}.{{ item[1] }}"
-    dest: "{{ kolla_config_path }}/config/swift/{{ item[0] }}.{{ item[1] }}"
-    remote_src: True
-    owner: "{{ ansible_user_uid }}"
-    group: "{{ ansible_user_gid }}"
-    mode: 0644
-  with_nested:
-    - "{{ swift_service_names }}"
-    - - ring.gz
-      - builder
-  delegate_to: "{{ swift_ring_build_host }}"
-  become: True
-  run_once: True
-
-- name: Remove Swift ring build directory from build host
-  file:
-    path: "{{ swift_ring_build_path }}"
-    state: absent
-  delegate_to: "{{ swift_ring_build_host }}"
-  become: True
-  run_once: True
ansible/swift-block-devices.yml (new file)
@@ -0,0 +1,11 @@
+---
+- name: Ensure Swift block devices are prepared
+  hosts: "{{ swift_hosts }}"
+  vars:
+    swift_hosts: storage
+  tags:
+    - swift
+    - swift-block-devices
+  roles:
+    - role: swift-block-devices
+      when: kolla_enable_swift | bool
ansible/swift-rings.yml (new file)
@@ -0,0 +1,35 @@
+---
+- name: Ensure swift ring files exist
+  hosts: localhost
+  tags:
+    - swift
+    - swift-rings
+  tasks:
+    - name: Initialise a fact about swift hosts
+      set_fact:
+        swift_host_config: []
+
+    - name: Update a fact about Swift hosts
+      set_fact:
+        swift_host_config: "{{ swift_host_config + [swift_host] }}"
+      vars:
+        swift_host:
+          host: "{{ host }}"
+          region: "{{ hostvars[host]['swift_region'] }}"
+          zone: "{{ hostvars[host]['swift_zone'] }}"
+          ip: "{{ swift_storage_net_name | net_ip(inventory_hostname=host) }}"
+          ports: "{{ hostvars[host]['swift_service_ports'] }}"
+          replication_ip: "{{ swift_storage_replication_net_name | net_ip(inventory_hostname=host) }}"
+          replication_ports: "{{ hostvars[host]['swift_service_ports'] }}"
+          block_devices: "{{ hostvars[host]['swift_block_devices'] }}"
+          block_device_default_services: "{{ hostvars[host]['swift_block_device_default_services'] }}"
+          block_device_default_weight: "{{ hostvars[host]['swift_block_device_default_weight'] }}"
+      with_inventory_hostnames: "{{ swift_hosts }}"
+      loop_control:
+        loop_var: host
+
+    - include_role:
+        name: swift-rings
+      vars:
+        swift_config_path: "{{ kayobe_config_path }}/kolla/config/swift"
+      when: kolla_enable_swift | bool
@@ -1,13 +0,0 @@
----
-- hosts: controllers
-  tags:
-    - swift
-  roles:
-    - role: swift-setup
-      swift_image: "kolla/{{ kolla_base_distro }}-{{ kolla_install_type }}-swift-base:{{ kolla_openstack_release }}"
-      swift_ring_build_host: "{{ groups['controllers'][0] }}"
-      # ID of the region for this Swift service.
-      swift_region: 1
-      # ID of the zone for this Swift service.
-      swift_zone: "{{ groups['controllers'].index(inventory_hostname) % swift_num_zones }}"
-      when: kolla_enable_swift | bool
@@ -475,6 +475,18 @@ Storage network (``storage_net_name``)
     Name of the network used to carry storage data traffic.
 Storage management network (``storage_mgmt_net_name``)
     Name of the network used to carry storage management traffic.
+Ceph storage network (``ceph_storage_net_name``)
+    Name of the network used to carry Ceph storage data traffic.
+    Defaults to the storage network (``storage_net_name``).
+Ceph storage management network (``ceph_storage_mgmt_net_name``)
+    Name of the network used to carry Ceph storage management traffic.
+    Defaults to the storage management network (``storage_mgmt_net_name``).
+Swift storage network (``swift_storage_net_name``)
+    Name of the network used to carry Swift storage data traffic.
+    Defaults to the storage network (``storage_net_name``).
+Swift storage replication network (``swift_storage_replication_net_name``)
+    Name of the network used to carry Swift storage replication traffic.
+    Defaults to the storage management network (``storage_mgmt_net_name``).
 Workload inspection network (``inspection_net_name``)
     Name of the network used to perform hardware introspection on the bare
     metal workload hosts.
@@ -501,6 +513,10 @@ To configure network roles in a system with two networks, ``example1`` and
     external_net_name: example2
     storage_net_name: example2
     storage_mgmt_net_name: example2
+    ceph_storage_net_name: example2
+    ceph_storage_mgmt_net_name: example2
+    swift_storage_net_name: example2
+    swift_storage_replication_net_name: example2
     inspection_net_name: example2
     cleaning_net_name: example2
 
@@ -733,6 +749,19 @@ a list of names of additional networks to attach. Alternatively, the list may
 be completely overridden by setting ``monitoring_network_interfaces``. These
 variables are found in ``${KAYOBE_CONFIG_PATH}/monitoring.yml``.
 
+Storage Hosts
+-------------
+
+By default, the storage hosts are attached to the following networks:
+
+* overcloud admin network
+* internal network
+* storage network
+* storage management network
+
+In addition, if Ceph or Swift is enabled, they can also be attached to the
+Ceph and Swift storage, management and replication networks.
+
 Virtualised Compute Hosts
 -------------------------
 
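For example, the storage group might map the Ceph networks to interfaces via
``etc/kayobe/inventory/group_vars/storage/network-interfaces`` (the interface
names here are hypothetical)::

    ceph_storage_net_interface: eth5
    ceph_storage_mgmt_net_interface: eth6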
@@ -391,6 +391,18 @@ should be set to ``True``. To build images locally::
 If images have been built previously, they will not be rebuilt. To force
 rebuilding images, use the ``--force-rebuild`` argument.
 
+Building Swift Rings
+--------------------
+
+.. note::
+
+   This section can be skipped if Swift is not in use.
+
+Swift uses ring files to control placement of data across a cluster. These
+files can be generated automatically using the following command::
+
+    (kayobe) $ kayobe overcloud swift rings generate
+
 Deploying Containerised Services
 --------------------------------
 
etc/kayobe/ceph.yml (new file)
@@ -0,0 +1,7 @@
+---
+###############################################################################
+# OpenStack Ceph configuration.
+
+# Ansible host pattern matching hosts on which Ceph storage services
+# are deployed. The default is to use hosts in the 'storage' group.
+#ceph_hosts:
@@ -22,6 +22,11 @@
 # storage_net_bridge_ports:
 # storage_net_bond_slaves:
 
+# Ceph storage network IP information.
+# ceph_storage_net_interface:
+# ceph_storage_net_bridge_ports:
+# ceph_storage_net_bond_slaves:
+
 ###############################################################################
 # Dummy variable to allow Ansible to accept this file.
 workaround_ansible_issue_8743: yes
@@ -32,6 +32,16 @@
 # storage_mgmt_net_bridge_ports:
 # storage_mgmt_net_bond_slaves:
 
+# Ceph storage network IP information.
+# ceph_storage_net_interface:
+# ceph_storage_net_bridge_ports:
+# ceph_storage_net_bond_slaves:
+
+# Swift storage network IP information.
+# swift_storage_net_interface:
+# swift_storage_net_bridge_ports:
+# swift_storage_net_bond_slaves:
+
 ###############################################################################
 # Dummy variable to allow Ansible to accept this file.
 workaround_ansible_issue_8743: yes
etc/kayobe/inventory/group_vars/storage/network-interfaces (new file)
@@ -0,0 +1,47 @@
+---
+###############################################################################
+# Network interface definitions for the storage group.
+
+# Overcloud provisioning network IP information.
+# provision_oc_net_interface:
+# provision_oc_net_bridge_ports:
+# provision_oc_net_bond_slaves:
+
+# External network IP information.
+# external_net_interface:
+# external_net_bridge_ports:
+# external_net_bond_slaves:
+
+# Storage network IP information.
+# storage_net_interface:
+# storage_net_bridge_ports:
+# storage_net_bond_slaves:
+
+# Storage management network IP information.
+# storage_mgmt_net_interface:
+# storage_mgmt_net_bridge_ports:
+# storage_mgmt_net_bond_slaves:
+
+# Ceph storage network IP information.
+# ceph_storage_net_interface:
+# ceph_storage_net_bridge_ports:
+# ceph_storage_net_bond_slaves:
+
+# Ceph storage management network IP information.
+# ceph_storage_mgmt_net_interface:
+# ceph_storage_mgmt_net_bridge_ports:
+# ceph_storage_mgmt_net_bond_slaves:
+
+# Swift storage network IP information.
+# swift_storage_net_interface:
+# swift_storage_net_bridge_ports:
+# swift_storage_net_bond_slaves:
+
+# Swift storage replication network IP information.
+# swift_storage_replication_net_interface:
+# swift_storage_replication_net_bridge_ports:
+# swift_storage_replication_net_bond_slaves:
+
+###############################################################################
+# Dummy variable to allow Ansible to accept this file.
+workaround_ansible_issue_8743: yes
@@ -45,6 +45,18 @@
 # Name of the network used to carry storage management traffic.
 #storage_mgmt_net_name:
 
+# Name of the network used to carry ceph storage data traffic.
+#ceph_storage_net_name:
+
+# Name of the network used to carry ceph storage management traffic.
+#ceph_storage_mgmt_net_name:
+
+# Name of the network used to carry swift storage data traffic.
+#swift_storage_net_name:
+
+# Name of the network used to carry swift storage replication traffic.
+#swift_storage_replication_net_name:
+
 # Name of the network used to perform hardware introspection on the bare metal
 # workload hosts.
 #inspection_net_name:
@@ -18,6 +18,16 @@
 # List of extra networks to which storage nodes are attached.
 #storage_extra_network_interfaces:
 
+# Whether this host requires access to Ceph networks.
+#storage_needs_ceph_network:
+
+#storage_needs_ceph_mgmt_network:
+
+# Whether this host requires access to Swift networks.
+#storage_needs_swift_network:
+
+#storage_needs_swift_replication_network:
+
 ###############################################################################
 # Storage node BIOS configuration.
 
@@ -2,18 +2,63 @@
 ###############################################################################
 # OpenStack Swift configuration.
 
+# Short name of the kolla container image used to build rings. Default is the
+# swift-object image.
+#swift_ring_build_image_name:
+
+# Full name of the kolla container image used to build rings.
+#swift_ring_build_image:
+
+# Ansible host pattern matching hosts on which Swift object storage services
+# are deployed. The default is to use hosts in the 'storage' group.
+#swift_hosts:
+
+# Name of the host used to build Swift rings. Default is the first host of
+# 'swift_hosts'.
+#swift_ring_build_host:
+
+# ID of the Swift region for this host. Default is 1.
+#swift_region:
+
+# ID of the Swift zone. This can be set to different values for different hosts
+# to place them in different zones. Default is 0.
+#swift_zone:
+
 # Base-2 logarithm of the number of partitions.
-# i.e. num_partitions=2^<swift_part_power>.
+# i.e. num_partitions=2^<swift_part_power>. Default is 10.
 #swift_part_power:
 
-# Object replication count.
+# Object replication count. Default is the smaller of the number of Swift
+# hosts, or 3.
 #swift_replication_count:
 
-# Minimum time in hours between moving a given partition.
+# Minimum time in hours between moving a given partition. Default is 1.
 #swift_min_part_hours:
 
-# Number of Swift Zones.
-#swift_num_zones:
+# Ports on which Swift services listen. Default is:
+# object: 6000
+# account: 6001
+# container: 6002
+#swift_service_ports:
+
+# List of block devices to use for Swift. Each item is a dict with the
+# following items:
+# - 'device': Block device path. Required.
+# - 'fs_label': Name of the label used to create the file system on the device.
+#   Optional. Default is to use the basename of the device.
+# - 'services': List of services that will use this block device. Optional.
+#   Default is 'swift_block_device_default_services'. Allowed items are
+#   'account', 'container', and 'object'.
+# - 'weight': Weight of the block device. Optional. Default is
+#   'swift_block_device_default_weight'.
+#swift_block_devices:
+
+# Default weight to assign to block devices in the ring. Default is 100.
+#swift_block_device_default_weight:
+
+# Default list of services to assign block devices to. Allowed items are
+# 'account', 'container', and 'object'. Default value is all of these.
+#swift_block_device_default_services:
+
 ###############################################################################
 # Dummy variable to allow Ansible to accept this file.
@@ -901,7 +901,7 @@ class OvercloudHostConfigure(KollaAnsibleMixin, KayobeAnsibleMixin, VaultMixin,
         # Further kayobe playbooks.
         playbooks = _build_playbook_list(
             "pip", "kolla-target-venv", "kolla-host",
-            "docker", "ceph-block-devices")
+            "docker", "ceph-block-devices", "swift-block-devices")
         self.run_kayobe_playbooks(parsed_args, playbooks,
                                   extra_vars=extra_vars, limit="overcloud")
 
@@ -996,7 +996,8 @@ class OvercloudServiceConfigurationGenerate(KayobeAnsibleMixin,
         playbooks = _build_playbook_list("kolla-ansible")
         self.run_kayobe_playbooks(parsed_args, playbooks, tags="config")
 
-        playbooks = _build_playbook_list("kolla-openstack", "swift-setup")
+        playbooks = _build_playbook_list("kolla-openstack")
+
         self.run_kayobe_playbooks(parsed_args, playbooks)
 
         # Run kolla-ansible prechecks before deployment.
@@ -1085,7 +1086,8 @@ class OvercloudServiceDeploy(KollaAnsibleMixin, KayobeAnsibleMixin, VaultMixin,
         playbooks = _build_playbook_list("kolla-ansible")
         self.run_kayobe_playbooks(parsed_args, playbooks, tags="config")
 
-        playbooks = _build_playbook_list("kolla-openstack", "swift-setup")
+        playbooks = _build_playbook_list("kolla-openstack")
+
         self.run_kayobe_playbooks(parsed_args, playbooks)
 
         # Run kolla-ansible prechecks before deployment.
@@ -1142,7 +1144,8 @@ class OvercloudServiceReconfigure(KollaAnsibleMixin, KayobeAnsibleMixin,
         playbooks = _build_playbook_list("kolla-ansible")
         self.run_kayobe_playbooks(parsed_args, playbooks, tags="config")
 
-        playbooks = _build_playbook_list("kolla-openstack", "swift-setup")
+        playbooks = _build_playbook_list("kolla-openstack")
+
         self.run_kayobe_playbooks(parsed_args, playbooks)
 
         # Run kolla-ansible prechecks before reconfiguration.
@@ -1353,6 +1356,15 @@ class OvercloudPostConfigure(KayobeAnsibleMixin, VaultMixin, Command):
         self.run_kayobe_playbooks(parsed_args, playbooks)
 
 
+class OvercloudSwiftRingsGenerate(KayobeAnsibleMixin, VaultMixin, Command):
+    """Generate Swift rings."""
+
+    def take_action(self, parsed_args):
+        self.app.LOG.debug("Generating Swift rings")
+        playbooks = _build_playbook_list("swift-rings")
+        self.run_kayobe_playbooks(parsed_args, playbooks)
+
+
 class NetworkConnectivityCheck(KayobeAnsibleMixin, VaultMixin, Command):
     """Check network connectivity between hosts in the control plane.
 
@@ -1001,6 +1001,8 @@ class TestCase(unittest.TestCase):
                     utils.get_data_files_path("ansible", "docker.yml"),
                     utils.get_data_files_path(
                         "ansible", "ceph-block-devices.yml"),
+                    utils.get_data_files_path(
+                        "ansible", "swift-block-devices.yml"),
                 ],
                 limit="overcloud",
                 extra_vars={"pip_applicable_users": [None]},
@@ -1425,6 +1427,26 @@ class TestCase(unittest.TestCase):
         ]
         self.assertEqual(expected_calls, mock_run.call_args_list)
 
+    @mock.patch.object(commands.KayobeAnsibleMixin,
+                       "run_kayobe_playbooks")
+    def test_overcloud_swift_rings_generate(self, mock_run):
+        command = commands.OvercloudSwiftRingsGenerate(TestApp(), [])
+        parser = command.get_parser("test")
+        parsed_args = parser.parse_args([])
+
+        result = command.run(parsed_args)
+        self.assertEqual(0, result)
+
+        expected_calls = [
+            mock.call(
+                mock.ANY,
+                [
+                    utils.get_data_files_path("ansible", "swift-rings.yml"),
+                ],
+            ),
+        ]
+        self.assertEqual(expected_calls, mock_run.call_args_list)
+
     @mock.patch.object(commands.KayobeAnsibleMixin,
                        "run_kayobe_playbooks")
     def test_baremetal_compute_inspect(self, mock_run):
@@ -0,0 +1,17 @@
+---
+features:
+  - |
+    Add support for separate storage networks for both Ceph and Swift.
+    This adds four additional networks, which can be used to separate the storage
+    network traffic as follows:
+
+    * Ceph storage network (ceph_storage_net_name) is used to carry Ceph storage
+      data traffic. Defaults to the storage network (storage_net_name).
+    * Ceph storage management network (ceph_storage_mgmt_net_name) is used to carry
+      Ceph storage management traffic. Defaults to the storage management network
+      (storage_mgmt_net_name).
+    * Swift storage network (swift_storage_net_name) is used to carry Swift storage data
+      traffic. Defaults to the storage network (storage_net_name).
+    * Swift storage replication network (swift_storage_replication_net_name) is used to
+      carry Swift storage replication traffic. Defaults to the storage management network
+      (storage_mgmt_net_name).
releasenotes/notes/swift-improvements-07a2b75967f642e8.yaml (new file)
@@ -0,0 +1,15 @@
+---
+features:
+  - |
+    Improvements to Swift device management and ring generation.
+
+    The device management and ring generation are now separate, with device management
+    occurring during 'kayobe overcloud host configure', and ring generation during a
+    new command, 'kayobe overcloud swift rings generate'.
+
+    For the device management, we now use standard Ansible modules rather than commands
+    for device preparation. File system labels can be configured for each device individually.
+
+    For ring generation, all commands are run on a single host, by default a host in the Swift
+    storage group. A Python script runs in one of the kolla Swift containers, which consumes
+    an autogenerated YAML config file that defines the layout of the rings.
@@ -70,6 +70,7 @@ kayobe.cli=
     overcloud_service_destroy = kayobe.cli.commands:OvercloudServiceDestroy
     overcloud_service_reconfigure = kayobe.cli.commands:OvercloudServiceReconfigure
     overcloud_service_upgrade = kayobe.cli.commands:OvercloudServiceUpgrade
+    overcloud_swift_rings_generate = kayobe.cli.commands:OvercloudSwiftRingsGenerate
     physical_network_configure = kayobe.cli.commands:PhysicalNetworkConfigure
     playbook_run = kayobe.cli.commands:PlaybookRun
     seed_container_image_build = kayobe.cli.commands:SeedContainerImageBuild