Allow multiple VIPs per LB

Users can specify additional subnet_id/ip_address pairs to bring up on
the VIP port. This allows for situations like an LB with both IPv4 and
IPv6 VIPs, or one exposed on both a public and a private network.

For UDP/SCTP load balancers, mixing an IPv4 VIP with IPv6 members is
not supported (nor an IPv6 VIP with IPv4 members). It is still possible
to use IPv4 and IPv6 VIPs at the same time in the same load balancer,
but an IPv4 VIP can only communicate with IPv4 members.
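
For illustration (all IDs below are placeholders), a dual-stack create
request body now looks roughly like:

    body = {"loadbalancer": {
        "name": "dual-stack-lb",
        "vip_subnet_id": "<ipv4-subnet-id>",   # primary VIP (IPv4)
        "additional_vips": [
            # ip_address is optional; one is allocated when omitted
            {"subnet_id": "<ipv6-subnet-id>"},
        ],
    }}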

Thanks Michael for help with validating/fixing the templates!
Thanks Gregory for help with the CentOS networking!

WIP: Remove some debugging, tempest tests, improve keepalivedlvs test
     coverage

Co-Authored-By: Michael Johnson <johnsomor@gmail.com>
Co-Authored-By: Gregory Thiemonge <gthiemon@redhat.com>
Co-Authored-By: Brian Haley <bhaley@redhat.com>
Story: 2005608
Task: 30847
Change-Id: Id7153dbf33b9616d7af685fcf13ad9a79793c06b
Adam Harwell 2019-05-20 18:14:17 -07:00 committed by Gregory Thiemonge
parent 599c8dd33a
commit c47a055f55
89 changed files with 2612 additions and 732 deletions

View File

@@ -135,6 +135,17 @@ active_connections:
in: body
required: true
type: integer
additional_vips:
description: |
A list of JSON objects defining "additional VIPs". The format for these
is ``{"subnet_id": <subnet_id>, "ip_address": <ip_address>}``, where
the ``subnet_id`` field is mandatory and the ``ip_address`` field is
optional. Additional VIP subnets must all belong to the same network as
the primary VIP.
in: body
required: false
type: array
min_version: 2.17
address:
description: |
The IP address of the resource.
@@ -1511,14 +1522,14 @@ tags:
in: body
min_version: 2.5
required: true
type: list
type: array
tags-optional:
description: |
A list of simple strings assigned to the resource.
in: body
min_version: 2.5
required: false
type: list
type: array
timeout_client_data:
description: |
Frontend client inactivity timeout in milliseconds. Default: 50000.

View File

@@ -1 +1 @@
curl -X POST -H "Content-Type: application/json" -H "X-Auth-Token: <token>" -d '{"loadbalancer": {"description": "My favorite load balancer", "admin_state_up": true, "project_id": "e3cd678b11784734bc366148aa37580e", "flavor_id": "a7ae5d5a-d855-4f9a-b187-af66b53f4d04", "vip_subnet_id": "d4af86e1-0051-488c-b7a0-527f97490c9a", "vip_address": "203.0.113.50", "provider": "octavia", "name": "best_load_balancer", "vip_qos_policy_id": "ec4f78ca-8da8-4e99-8a1a-e3b94595a7a3", "availability_zone": "my_az", "tags": ["test_tag"]}}' http://198.51.100.10:9876/v2/lbaas/loadbalancers
curl -X POST -H "Content-Type: application/json" -H "X-Auth-Token: <token>" -d '{"loadbalancer": {"description": "My favorite load balancer", "admin_state_up": true, "project_id": "e3cd678b11784734bc366148aa37580e", "flavor_id": "a7ae5d5a-d855-4f9a-b187-af66b53f4d04", "vip_subnet_id": "d4af86e1-0051-488c-b7a0-527f97490c9a", "vip_address": "203.0.113.50", "additional_vips": [{"subnet_id": "3ca40b2e-c286-4e53-bdb9-dd01c8a0ad6d", "ip_address": "2001:db8::b33f"}, {"subnet_id": "44d92b92-510f-4c05-8058-bf5a17b4d41c"}], "provider": "octavia", "name": "best_load_balancer", "vip_qos_policy_id": "ec4f78ca-8da8-4e99-8a1a-e3b94595a7a3", "availability_zone": "my_az", "tags": ["test_tag"]}}' http://198.51.100.10:9876/v2/lbaas/loadbalancers

View File

@@ -5,6 +5,10 @@
"project_id": "e3cd678b11784734bc366148aa37580e",
"vip_subnet_id": "d4af86e1-0051-488c-b7a0-527f97490c9a",
"vip_address": "203.0.113.50",
"additional_vips": [
{"subnet_id": "3ca40b2e-c286-4e53-bdb9-dd01c8a0ad6d", "ip_address": "2001:db8::b33f"},
{"subnet_id": "44d92b92-510f-4c05-8058-bf5a17b4d41c"}
],
"provider": "octavia",
"name": "best_load_balancer",
"vip_qos_policy_id": "ec4f78ca-8da8-4e99-8a1a-e3b94595a7a3",

View File

@@ -9,6 +9,10 @@
"vip_address": "203.0.113.50",
"vip_network_id": "d0d217df-3958-4fbf-a3c2-8dad2908c709",
"vip_port_id": "b4ca07d1-a31e-43e2-891a-7d14f419f342",
"additional_vips": [
{"subnet_id": "3ca40b2e-c286-4e53-bdb9-dd01c8a0ad6d", "ip_address": "2001:db8::b33f"},
{"subnet_id": "44d92b92-510f-4c05-8058-bf5a17b4d41c", "ip_address": "198.51.100.4"}
],
"provider": "octavia",
"created_at": "2017-02-28T00:41:44",
"updated_at": "2017-02-28T00:43:30",

View File

@@ -74,6 +74,7 @@
"vip_address": "203.0.113.50",
"vip_network_id": "d0d217df-3958-4fbf-a3c2-8dad2908c709",
"vip_port_id": "b4ca07d1-a31e-43e2-891a-7d14f419f342",
"additional_vips": [],
"provider": "octavia",
"pools": [
{

View File

@@ -9,6 +9,7 @@
"vip_address": "203.0.113.50",
"vip_network_id": "d0d217df-3958-4fbf-a3c2-8dad2908c709",
"vip_port_id": "b4ca07d1-a31e-43e2-891a-7d14f419f342",
"additional_vips": [],
"provider": "octavia",
"created_at": "2017-02-28T00:41:44",
"updated_at": "2017-02-28T00:43:30",

View File

@@ -9,6 +9,7 @@
"vip_address": "203.0.113.50",
"vip_network_id": "d0d217df-3958-4fbf-a3c2-8dad2908c709",
"vip_port_id": "b4ca07d1-a31e-43e2-891a-7d14f419f342",
"additional_vips": [],
"provider": "octavia",
"created_at": "2017-02-28T00:41:44",
"updated_at": "2017-02-28T00:43:30",

View File

@@ -15,6 +15,7 @@
"vip_address": "203.0.113.50",
"vip_network_id": "d0d217df-3958-4fbf-a3c2-8dad2908c709",
"vip_port_id": "b4ca07d1-a31e-43e2-891a-7d14f419f342",
"additional_vips": [],
"provider": "octavia",
"pools": [
{

View File

@@ -45,6 +45,7 @@ Response Parameters
.. rest_parameters:: ../parameters.yaml
- additional_vips: additional_vips
- admin_state_up: admin_state_up
- availability_zone: availability-zone-name
- created_at: created_at
@@ -145,6 +146,11 @@ a VIP network for the load balancer:
octavia will attempt to allocate the ``vip_address`` from the subnet for
the VIP address.
Additional VIPs may also be specified in the ``additional_vips`` field by
providing a list of JSON objects containing a ``subnet_id`` and optionally
an ``ip_address``. All additional subnets must be part of the same network
as the primary VIP.
.. rest_status_code:: success ../http-status.yaml
- 201
@@ -163,6 +169,7 @@ Request
.. rest_parameters:: ../parameters.yaml
- additional_vips: additional_vips
- admin_state_up: admin_state_up-default-optional
- availability_zone: availability-zone-name-optional
- description: description-optional
@@ -196,6 +203,7 @@ Response Parameters
.. rest_parameters:: ../parameters.yaml
- additional_vips: additional_vips
- admin_state_up: admin_state_up
- availability_zone: availability-zone-name
- created_at: created_at
@@ -290,6 +298,7 @@ Response Parameters
.. rest_parameters:: ../parameters.yaml
- additional_vips: additional_vips
- admin_state_up: admin_state_up
- availability_zone: availability-zone-name
- created_at: created_at
@@ -377,6 +386,7 @@ Response Parameters
.. rest_parameters:: ../parameters.yaml
- additional_vips: additional_vips
- admin_state_up: admin_state_up
- created_at: created_at
- description: description

View File

@@ -8,14 +8,15 @@ sysctl-write-value net.ipv4.tcp_max_orphans 5800000
sysctl-write-value net.ipv4.tcp_max_syn_backlog 100000
sysctl-write-value net.ipv4.tcp_keepalive_time 300
sysctl-write-value net.ipv4.tcp_tw_reuse 1
sysctl-write-value net.core.somaxconn 65534
sysctl-write-value net.core.somaxconn 65534 # netns aware
sysctl-write-value net.ipv4.tcp_synack_retries 3
sysctl-write-value net.core.netdev_max_backlog 100000
# This should allow HAProxy maxconn to be 1,000,000
sysctl-write-value fs.file-max 2600000
sysctl-write-value fs.nr_open 2600000
sysctl-write-value fs.file-max 2600000 # netns aware
sysctl-write-value fs.nr_open 2600000 # netns aware
# It's ok for these to fail if conntrack module isn't loaded
sysctl-write-value net.netfilter.nf_conntrack_buckets 125000 || true # netns aware
sysctl-write-value net.netfilter.nf_conntrack_tcp_timeout_time_wait 5 || true
sysctl-write-value net.netfilter.nf_conntrack_tcp_timeout_fin_wait 5 || true

View File

@@ -63,19 +63,13 @@ class BaseOS(object):
)
interface.write()
def write_vip_interface_file(self, interface, vip, ip_version,
prefixlen, gateway,
mtu, vrrp_ip,
host_routes, fixed_ips=None):
def write_vip_interface_file(self, interface, vips, mtu, vrrp_info,
fixed_ips=None):
vip_interface = interface_file.VIPInterfaceFile(
name=interface,
mtu=mtu,
vip=vip,
ip_version=ip_version,
prefixlen=prefixlen,
gateway=gateway,
vrrp_ip=vrrp_ip,
host_routes=host_routes,
vips=vips,
vrrp_info=vrrp_info,
fixed_ips=fixed_ips,
topology=CONF.controller_worker.loadbalancer_topology)
vip_interface.write()
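# Illustrative call (example values; the rendered dicts are produced by
# the Plug helpers in the next file):
#   write_vip_interface_file(
#       interface='eth1',
#       vips=[{'ip_address': '203.0.113.50', 'ip_version': 4,
#              'prefixlen': 24, 'gateway': '203.0.113.1',
#              'host_routes': []}],
#       mtu=1500, vrrp_info={})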

View File

@@ -17,7 +17,6 @@ import ipaddress
import os
import socket
import stat
import subprocess
from oslo_config import cfg
from oslo_log import log as logging
@@ -43,18 +42,62 @@ class Plug(object):
ip_address="127.0.0.1",
prefixlen=8)
def render_vips(self, vips):
rendered_vips = []
for vip in vips:
ip_address = ipaddress.ip_address(vip['ip_address'])
subnet_cidr = ipaddress.ip_network(vip['subnet_cidr'])
prefixlen = subnet_cidr.prefixlen
host_routes = vip['host_routes']
gateway = vip['gateway']
rendered_vips.append({
'ip_address': ip_address.exploded,
'ip_version': ip_address.version,
'gateway': gateway,
'host_routes': host_routes,
'prefixlen': prefixlen
})
return rendered_vips
def build_vrrp_info(self, vrrp_ip, subnet_cidr, gateway, host_routes):
vrrp_info = {}
if vrrp_ip:
ip_address = ipaddress.ip_address(vrrp_ip)
subnet_cidr = ipaddress.ip_network(subnet_cidr)
prefixlen = subnet_cidr.prefixlen
vrrp_info.update({
'ip': ip_address.exploded,
'ip_version': ip_address.version,
'gateway': gateway,
'host_routes': host_routes,
'prefixlen': prefixlen
})
return vrrp_info
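# Illustrative shapes (example values): render_vips() expands each raw
# {'ip_address', 'subnet_cidr', 'gateway', 'host_routes'} entry, e.g.
#   render_vips([{'ip_address': '203.0.113.50',
#                 'subnet_cidr': '203.0.113.0/24',
#                 'gateway': '203.0.113.1', 'host_routes': []}])
#   -> [{'ip_address': '203.0.113.50', 'ip_version': 4,
#        'gateway': '203.0.113.1', 'host_routes': [], 'prefixlen': 24}]
# build_vrrp_info() returns the same fields for the VRRP address, keyed
# with 'ip' instead of 'ip_address'.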
def plug_vip(self, vip, subnet_cidr, gateway,
mac_address, mtu=None, vrrp_ip=None, host_routes=None):
# Validate vip and subnet_cidr, calculate broadcast address and netmask
mac_address, mtu=None, vrrp_ip=None, host_routes=(),
additional_vips=()):
vips = [{
'ip_address': vip,
'subnet_cidr': subnet_cidr,
'gateway': gateway,
'host_routes': host_routes
}] + list(additional_vips)
try:
ip = ipaddress.ip_address(vip)
network = ipaddress.ip_network(subnet_cidr)
vip = ip.exploded
prefixlen = network.prefixlen
except ValueError:
return webob.Response(json=dict(message="Invalid VIP"),
rendered_vips = self.render_vips(vips)
except ValueError as e:
vip_error_message = "Invalid VIP: {}".format(e)
return webob.Response(json=dict(message=vip_error_message),
status=400)
try:
vrrp_info = self.build_vrrp_info(vrrp_ip, subnet_cidr,
gateway, host_routes)
except ValueError:
return webob.Response(
json=dict(message="Invalid VRRP Address"), status=400)
# Check if the interface is already in the network namespace
# Do not attempt to re-plug the VIP if it is already in the
# network namespace
@@ -70,46 +113,14 @@ class Plug(object):
self._osutils.write_vip_interface_file(
interface=primary_interface,
vip=vip,
ip_version=ip.version,
prefixlen=prefixlen,
gateway=gateway,
vips=rendered_vips,
mtu=mtu,
vrrp_ip=vrrp_ip,
host_routes=host_routes)
vrrp_info=vrrp_info)
# Update the list of interfaces to add to the namespace
# This is used in the amphora reboot case to re-establish the namespace
self._update_plugged_interfaces_file(primary_interface, mac_address)
# Create the namespace
netns = pyroute2.NetNS(consts.AMPHORA_NAMESPACE, flags=os.O_CREAT)
netns.close()
# Load sysctl in new namespace
sysctl = pyroute2.NSPopen(consts.AMPHORA_NAMESPACE,
[consts.SYSCTL_CMD, '--system'],
stdout=subprocess.PIPE)
sysctl.communicate()
sysctl.wait()
sysctl.release()
cmd_list = [['modprobe', 'ip_vs'],
[consts.SYSCTL_CMD, '-w', 'net.ipv4.vs.conntrack=1']]
if ip.version == 4:
# For lvs function, enable ip_vs kernel module, enable ip_forward
# conntrack in amphora network namespace.
cmd_list.append([consts.SYSCTL_CMD, '-w', 'net.ipv4.ip_forward=1'])
elif ip.version == 6:
cmd_list.append([consts.SYSCTL_CMD, '-w',
'net.ipv6.conf.all.forwarding=1'])
for cmd in cmd_list:
ns_exec = pyroute2.NSPopen(consts.AMPHORA_NAMESPACE, cmd,
stdout=subprocess.PIPE)
ns_exec.wait()
ns_exec.release()
with pyroute2.IPRoute() as ipr:
# Move the interfaces into the namespace
idx = ipr.link_lookup(address=mac_address)[0]
@@ -119,10 +130,14 @@ class Plug(object):
# bring interfaces up
self._osutils.bring_interface_up(primary_interface, 'VIP')
vip_message = "VIPs plugged on interface {interface}: {vips}".format(
interface=primary_interface,
vips=", ".join([v['ip_address'] for v in rendered_vips])
)
return webob.Response(json=dict(
message="OK",
details="VIP {vip} plugged on interface {interface}".format(
vip=vip, interface=primary_interface)), status=202)
details=vip_message), status=202)
def _check_ip_addresses(self, fixed_ips):
if fixed_ips:
@@ -144,24 +159,26 @@ class Plug(object):
# If we have net_info, this is the special case of plugging a new
# subnet on the vrrp port, which is essentially a re-vip-plug
if vip_net_info:
ip = ipaddress.ip_address(vip_net_info['vip'])
network = ipaddress.ip_network(vip_net_info['subnet_cidr'])
vip = ip.exploded
prefixlen = network.prefixlen
vrrp_ip = vip_net_info.get('vrrp_ip')
subnet_cidr = vip_net_info['subnet_cidr']
gateway = vip_net_info['gateway']
host_routes = vip_net_info.get('host_routes', ())
host_routes = vip_net_info.get('host_routes', [])
vips = [{
'ip_address': vip_net_info['vip'],
'subnet_cidr': subnet_cidr,
'gateway': gateway,
'host_routes': host_routes
}] + vip_net_info.get('additional_vips', [])
rendered_vips = self.render_vips(vips)
vrrp_info = self.build_vrrp_info(vrrp_ip, subnet_cidr,
gateway, host_routes)
self._osutils.write_vip_interface_file(
interface=existing_interface,
vip=vip,
ip_version=ip.version,
prefixlen=prefixlen,
gateway=gateway,
vrrp_ip=vrrp_ip,
host_routes=host_routes,
vips=rendered_vips,
mtu=mtu,
vrrp_info=vrrp_info,
fixed_ips=fixed_ips)
self._osutils.bring_interface_up(existing_interface, 'vip')
# Otherwise, we are just plugging a run-of-the-mill network

View File

@@ -202,7 +202,8 @@ class Server(object):
net_info['mac_address'],
net_info.get('mtu'),
net_info.get('vrrp_ip'),
net_info.get('host_routes'))
net_info.get('host_routes', ()),
net_info.get('additional_vips', ()))
def plug_network(self):
try:

View File

@@ -103,79 +103,88 @@ class InterfaceFile(object):
class VIPInterfaceFile(InterfaceFile):
def __init__(self, name, mtu,
vip, ip_version, prefixlen,
gateway, vrrp_ip, host_routes,
topology, fixed_ips=None):
def __init__(self, name, mtu, vips, vrrp_info, fixed_ips, topology):
super().__init__(name, mtu=mtu)
if vrrp_ip:
has_ipv4 = [True for vip in vips if vip['ip_version'] == 4]
has_ipv6 = [True for vip in vips if vip['ip_version'] == 6]
if vrrp_info:
self.addresses.append({
consts.ADDRESS: vrrp_ip,
consts.PREFIXLEN: prefixlen
consts.ADDRESS: vrrp_info['ip'],
consts.PREFIXLEN: vrrp_info['prefixlen']
})
else:
key = consts.DHCP if ip_version == 4 else consts.IPV6AUTO
self.addresses.append({
key: True
})
if has_ipv4:
self.addresses.append({
consts.DHCP: True
})
if has_ipv6:
self.addresses.append({
consts.IPV6AUTO: True
})
if gateway:
# Add default routes if there's a gateway
self.routes.append({
consts.DST: (
"::/0" if ip_version == 6 else "0.0.0.0/0"),
consts.GATEWAY: gateway,
consts.FLAGS: [consts.ONLINK]
ip_versions = set()
for vip in vips:
gateway = vip.get('gateway')
ip_version = vip.get('ip_version')
ip_versions.add(ip_version)
if gateway:
# Add default routes if there's a gateway
self.routes.append({
consts.DST: (
"::/0" if ip_version == 6 else "0.0.0.0/0"),
consts.GATEWAY: gateway,
consts.FLAGS: [consts.ONLINK]
})
self.routes.append({
consts.DST: (
"::/0" if ip_version == 6 else "0.0.0.0/0"),
consts.GATEWAY: gateway,
consts.FLAGS: [consts.ONLINK],
consts.TABLE: 1,
})
# In ACTIVE_STANDBY topology, keepalived sets some addresses,
# routes and rules.
# Keep track of those resources in the interface file but mark them
# with a special flag so the amphora-interface would not add/delete
# keepalived-maintained things.
ignore = topology == consts.TOPOLOGY_ACTIVE_STANDBY
prefixlen = vip['prefixlen']
if ignore:
# Keepalived sets this prefixlen for the addresses it maintains
vip_prefixlen = 32 if ip_version == 4 else 128
else:
vip_prefixlen = prefixlen
self.addresses.append({
consts.ADDRESS: vip['ip_address'],
consts.PREFIXLEN: vip_prefixlen,
consts.IGNORE: ignore
})
vip_cidr = ipaddress.ip_network(
"{}/{}".format(vip['ip_address'], prefixlen), strict=False)
self.routes.append({
consts.DST: (
"::/0" if ip_version == 6 else "0.0.0.0/0"),
consts.GATEWAY: gateway,
consts.FLAGS: [consts.ONLINK],
consts.DST: vip_cidr.exploded,
consts.PREFSRC: vip['ip_address'],
consts.SCOPE: 'link',
consts.TABLE: 1,
consts.IGNORE: ignore
})
self.rules.append({
consts.SRC: vip['ip_address'],
consts.SRC_LEN: 128 if ip_version == 6 else 32,
consts.TABLE: 1,
consts.IGNORE: ignore
})
# In ACTIVE_STANDBY topology, keepalived sets some addresses, routes
# and rules.
# Keep track of those resources in the interface file but mark them
# with a special flag so the amphora-interface would not add/delete
# keepalived-maintained things.
ignore = topology == consts.TOPOLOGY_ACTIVE_STANDBY
if ignore:
# Keepalived sets this prefixlen for the addresses it maintains
vip_prefixlen = 32 if ip_version == 4 else 128
else:
vip_prefixlen = prefixlen
self.addresses.append({
consts.ADDRESS: vip,
consts.PREFIXLEN: vip_prefixlen,
consts.IGNORE: ignore
})
vip_cidr = ipaddress.ip_network(
"{}/{}".format(vip, prefixlen), strict=False)
self.routes.append({
consts.DST: vip_cidr.exploded,
consts.PREFSRC: vip,
consts.SCOPE: 'link',
consts.TABLE: 1,
consts.IGNORE: ignore
})
self.rules.append({
consts.SRC: vip,
consts.SRC_LEN: 128 if ip_version == 6 else 32,
consts.TABLE: 1,
consts.IGNORE: ignore
})
self.routes.extend(self.get_host_routes(host_routes))
self.routes.extend(self.get_host_routes(host_routes,
table=1))
ip_versions = {ip_version}
self.routes.extend(self.get_host_routes(vip['host_routes']))
self.routes.extend(self.get_host_routes(vip['host_routes'],
table=1))
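# Net effect per VIP (illustrative summary): the interface file gains an
# address entry (prefixlen forced to /32 or /128 and flagged IGNORE when
# keepalived owns the address in ACTIVE_STANDBY), default routes via the
# VIP gateway in both the main table and table 1, a connected route for
# the VIP subnet with PREFSRC/scope link in table 1, a source-based rule
# into table 1, and DHCP and/or IPV6AUTO stanzas when no vrrp_info is
# supplied.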
if fixed_ips:
for fixed_ip in fixed_ips:

View File

@@ -30,10 +30,9 @@ V4_HEX_IP_REGEX = re.compile(r"(\w{2})(\w{2})(\w{2})(\w{2})")
V6_RS_VALUE_REGEX = re.compile(r"(\[[[\w{4}:]+\b\]:\w{4})\s+(.*$)")
NS_REGEX = re.compile(r"net_namespace\s(\w+-\w+)")
V4_VS_REGEX = re.compile(r"virtual_server\s([\d+\.]+\b)\s(\d{1,5})")
V4_RS_REGEX = re.compile(r"real_server\s([\d+\.]+\b)\s(\d{1,5})")
V6_VS_REGEX = re.compile(r"virtual_server\s([\w*:]+\b)\s(\d{1,5})")
V6_RS_REGEX = re.compile(r"real_server\s([\w*:]+\b)\s(\d{1,5})")
VS_ADDRESS_REGEX = re.compile(r"virtual_server_group .* {\n"
r"\s+([a-f\d\.:]+)\s(\d{1,5})\n")
RS_ADDRESS_REGEX = re.compile(r"real_server\s([a-f\d\.:]+)\s(\d{1,5})")
CONFIG_COMMENT_REGEX = re.compile(
r"#\sConfiguration\sfor\s(\w+)\s(\w{8}-\w{4}-\w{4}-\w{4}-\w{12})")
DISABLED_CONFIG_COMMENT_REGEX = re.compile(
@@ -60,7 +59,7 @@ def read_kernel_file(ns_name, file_path):
return output
def get_listener_realserver_mapping(ns_name, listener_ip_port,
def get_listener_realserver_mapping(ns_name, listener_ip_ports,
health_monitor_enabled):
# returned result:
# actual_member_result = {'rs_ip:listened_port': {
@@ -70,15 +69,18 @@ def get_listener_realserver_mapping(ns_name, listener_ip_port,
# 'ActiveConn': 0,
# 'InActConn': 0
# }}
listener_ip, listener_port = listener_ip_port.rsplit(':', 1)
ip_obj = ipaddress.ip_address(listener_ip.strip('[]'))
output = read_kernel_file(ns_name, KERNEL_LVS_PATH).split('\n')
if ip_obj.version == 4:
ip_to_hex_format = "%.8X" % ip_obj._ip
else:
ip_to_hex_format = r'\[' + ip_obj.exploded + r'\]'
port_hex_format = "%.4X" % int(listener_port)
idex = ip_to_hex_format + ':' + port_hex_format
idex_list = []
for listener_ip_port in listener_ip_ports:
listener_ip, listener_port = listener_ip_port.rsplit(':', 1)
ip_obj = ipaddress.ip_address(listener_ip.strip('[]'))
output = read_kernel_file(ns_name, KERNEL_LVS_PATH).split('\n')
if ip_obj.version == 4:
ip_to_hex_format = "%.8X" % ip_obj._ip
else:
ip_to_hex_format = r'\[' + ip_obj.exploded + r'\]'
port_hex_format = "%.4X" % int(listener_port)
idex_list.append(ip_to_hex_format + ':' + port_hex_format)
idex = "({})".format("|".join(idex_list))
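# Worked example (illustrative addresses): a listener on 203.0.113.50:80
# with an additional VIP [2001:db8::b33f]:80 yields
#   "%.8X" % int(ipaddress.ip_address("203.0.113.50"))  -> "CB007132"
#   "%.4X" % 80                                          -> "0050"
# so idex becomes
#   "(CB007132:0050|\[2001:0db8:0000:0000:0000:0000:0000:b33f\]:0050)"
# and matches /proc/net/ip_vs entries of either address family.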
if health_monitor_enabled:
member_status = constants.UP
@@ -139,7 +141,7 @@ def get_listener_realserver_mapping(ns_name, listener_ip_port,
def get_lvs_listener_resource_ipports_nsname(listener_id):
# resource_ipport_mapping = {'Listener': {'id': listener-id,
# 'ipport': ipport},
# 'ipports': [ipport1, ipport2]},
# 'Pool': {'id': pool-id},
# 'Members': [{'id': member-id-1,
# 'ipport': ipport},
@@ -150,11 +152,21 @@ def get_lvs_listener_resource_ipports_nsname(listener_id):
with open(util.keepalived_lvs_cfg_path(listener_id),
'r', encoding='utf-8') as f:
cfg = f.read()
ret = VS_ADDRESS_REGEX.findall(cfg)
def _escape_ip(ip):
ret = ipaddress.ip_address(ip)
if ret.version == 6:
return "[" + ret.compressed + "]"
return ret.compressed
listener_ip_ports = [
_escape_ip(ip_port[0]) + ":" + ip_port[1]
for ip_port in ret
]
ns_name = NS_REGEX.findall(cfg)[0]
listener_ip_port = V4_VS_REGEX.findall(cfg)
if not listener_ip_port:
listener_ip_port = V6_VS_REGEX.findall(cfg)
listener_ip_port = listener_ip_port[0] if listener_ip_port else []
disabled_resource_ids = DISABLED_CONFIG_COMMENT_REGEX.findall(cfg)
@@ -164,7 +176,7 @@ def get_lvs_listener_resource_ipports_nsname(listener_id):
if listener_disabled:
return None, ns_name
if not listener_ip_port:
if not listener_ip_ports:
# If we could not get listener_ip_ports from the lvs config file,
# that means the listener's default pool has no enabled member
# yet. But at this moment, we can get listener_id and ns_name, so
@@ -175,9 +187,7 @@ def get_lvs_listener_resource_ipports_nsname(listener_id):
rs_ip_port_list = []
for line in cfg_line:
if 'real_server' in line:
res = V4_RS_REGEX.findall(line)
if not res:
res = V6_RS_REGEX.findall(line)
res = RS_ADDRESS_REGEX.findall(line)
rs_ip_port_list.append(res[0])
resource_type_ids = CONFIG_COMMENT_REGEX.findall(cfg)
@@ -220,12 +230,8 @@ def get_lvs_listener_resource_ipports_nsname(listener_id):
rs_ip_port_list[index][0] + ':' +
rs_ip_port_list[index][1])
listener_ip = ipaddress.ip_address(listener_ip_port[0])
if listener_ip.version == 6:
listener_ip_port = (
'[' + listener_ip.compressed + ']', listener_ip_port[1])
resource_ipport_mapping['Listener']['ipport'] = (
listener_ip_port[0] + ':' + listener_ip_port[1])
resource_ipport_mapping['Listener']['ipports'] = (
listener_ip_ports)
return resource_ipport_mapping, ns_name
@@ -262,7 +268,7 @@ def get_lvs_listener_pool_status(listener_id):
hm_enabled = len(CHECKER_REGEX.findall(cfg)) > 0
_, realserver_result = get_listener_realserver_mapping(
ns_name, resource_ipport_mapping['Listener']['ipport'],
ns_name, resource_ipport_mapping['Listener']['ipports'],
hm_enabled)
pool_status = constants.UP
member_results = {}
@@ -436,30 +442,37 @@ def get_lvs_listeners_stats():
stats_res = get_ipvsadm_info(constants.AMPHORA_NAMESPACE,
is_stats_cmd=True)
for listener_id, ipport in ipport_mapping.items():
listener_ipport = ipport['Listener']['ipport']
listener_ipports = ipport['Listener']['ipports']
# This listener would be in ERROR; wait for the next loop to sync it.
# This also skips the case of a UDP listener with no enabled member,
# so we don't check it for failover.
if listener_ipport not in scur_res or listener_ipport not in stats_res:
scur_found = stats_found = False
for listener_ipport in listener_ipports:
if listener_ipport in scur_res:
scur_found = True
if listener_ipport in stats_res:
stats_found = True
if not scur_found or not stats_found:
continue
scur, bout, bin, stot, ereq = 0, 0, 0, 0, 0
# All results contain this listener, so its status should be OPEN
status = constants.OPEN
# Get scur
for m in scur_res[listener_ipport]['Members']:
for item in m:
if item[0] == 'ActiveConn':
scur += int(item[1])
for listener_ipport in listener_ipports:
for m in scur_res[listener_ipport]['Members']:
for item in m:
if item[0] == 'ActiveConn':
scur += int(item[1])
# Get bout, bin, stot
for item in stats_res[listener_ipport]['Listener']:
if item[0] == 'Conns':
stot = int(item[1])
elif item[0] == 'OutBytes':
bout = int(item[1])
elif item[0] == 'InBytes':
bin = int(item[1])
# Get bout, bin, stot
for item in stats_res[listener_ipport]['Listener']:
if item[0] == 'Conns':
stot += int(item[1])
elif item[0] == 'OutBytes':
bout += int(item[1])
elif item[0] == 'InBytes':
bin += int(item[1])
listener_stats_res.update({
listener_id: {

View File

@@ -151,7 +151,8 @@ class AmphoraLoadBalancerDriver(object, metaclass=abc.ABCMeta):
"""
def post_vip_plug(self, amphora, load_balancer, amphorae_network_config,
vrrp_port=None, vip_subnet=None):
vrrp_port=None, vip_subnet=None,
additional_vip_data=None):
"""Called after network driver has allocated and plugged the VIP
:param amphora:
@@ -171,8 +172,13 @@ class AmphoraLoadBalancerDriver(object, metaclass=abc.ABCMeta):
:type vip_subnet: octavia.network.data_models.Subnet
:type vip_network: octavia.network.data_models.AmphoraNetworkConfig
:type additional_vip_data: list of
octavia.network.data_models.AdditionalVipData
:returns: None
This is to do any additional work needed on the amphorae to plug
the vip, such as bringing up interfaces.
"""

View File

@@ -400,11 +400,13 @@ class HaproxyAmphoraLoadBalancerDriver(
'mac_address': port[consts.MAC_ADDRESS],
'vrrp_ip': amphora[consts.VRRP_IP],
'mtu': mtu or port[consts.NETWORK][consts.MTU],
'host_routes': host_routes}
'host_routes': host_routes,
'additional_vips': []}
return net_info
def post_vip_plug(self, amphora, load_balancer, amphorae_network_config,
vrrp_port=None, vip_subnet=None):
vrrp_port=None, vip_subnet=None,
additional_vip_data=None):
if amphora.status != consts.DELETED:
self._populate_amphora_api_version(amphora)
if vip_subnet is None:
@@ -420,6 +422,22 @@ class HaproxyAmphoraLoadBalancerDriver(
net_info = self._build_net_info(
port.to_dict(recurse=True), amphora.to_dict(),
vip_subnet.to_dict(recurse=True), mtu)
if additional_vip_data is None:
additional_vip_data = amphorae_network_config.get(
amphora.id).additional_vip_data
for add_vip in additional_vip_data:
LOG.debug('Filling net_info ADDITIONAL_VIPS: %(vips)s',
{'vips': add_vip})
add_host_routes = [{'nexthop': hr.nexthop,
'destination': hr.destination}
for hr in add_vip.subnet.host_routes]
add_net_info = {'subnet_cidr': add_vip.subnet.cidr,
'ip_address': add_vip.ip_address,
'gateway': add_vip.subnet.gateway_ip,
'host_routes': add_host_routes}
net_info['additional_vips'].append(add_net_info)
LOG.debug('Passing ADDITIONAL VIPS to the amphora: %(vips)s',
{'vips': net_info['additional_vips']})
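# Illustrative net_info payload (example values) sent to the amphora's
# plug_vip endpoint:
#   {'subnet_cidr': '203.0.113.0/24', 'gateway': '203.0.113.1',
#    'mac_address': 'fa:16:3e:00:00:01', 'vrrp_ip': None, 'mtu': 1500,
#    'host_routes': [],
#    'additional_vips': [{'subnet_cidr': '2001:db8::/64',
#                         'ip_address': '2001:db8::b33f',
#                         'gateway': None, 'host_routes': []}]}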
try:
self.clients[amphora.api_version].plug_vip(
amphora, load_balancer.vip.ip_address, net_info)
@@ -453,6 +471,7 @@ class HaproxyAmphoraLoadBalancerDriver(
port.to_dict(recurse=True), amphora.to_dict(),
amphora_network_config[consts.VIP_SUBNET],
port.network.mtu)
# TODO(gthiemonge) Need to handle additional vip data
net_info['vip'] = amphora.ha_ip
port_info['vip_net_info'] = net_info
try:

View File

@@ -17,6 +17,7 @@ import os
import jinja2
from oslo_config import cfg
from oslo_log import log as logging
from octavia.amphorae.backends.agent.api_server import util
from octavia.common import constants
@ -26,6 +27,7 @@ KEEPALIVED_TEMPLATE = os.path.abspath(
os.path.join(os.path.dirname(__file__),
'templates/keepalived_base.template'))
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
class KeepalivedJinjaTemplater(object):
@@ -53,12 +55,14 @@ class KeepalivedJinjaTemplater(object):
lstrip_blocks=True)
return self._jinja_env.get_template(os.path.basename(template_file))
def build_keepalived_config(self, loadbalancer, amphora, vip_cidr):
def build_keepalived_config(self, loadbalancer, amphora, amp_net_config):
"""Renders the loadblanacer keepalived configuration for Active/Standby
:param loadbalancer: A lodabalancer object
:param amp: An amphora object
:param vip_cidr: The VIP subnet cidr
:param loadbalancer: A loadbalancer object
:param amphora: An amphora object
:param amp_net_config: The amphora network config,
an AmphoraeNetworkConfig object in amphorav1,
a dict in amphorav2
"""
# Note on keepalived configuration: The current base configuration
# enforced Master election whenever a high priority VRRP instance
@@ -69,17 +73,62 @@ class KeepalivedJinjaTemplater(object):
# to add the "nopreempt" flag in the backup instance section.
peers_ips = []
# Validate the VIP address and see if it is IPv6
vip = loadbalancer.vip.ip_address
vip_addr = ipaddress.ip_address(vip)
vip_ipv6 = vip_addr.version == 6
# Normalize and validate the VIP subnet CIDR
vip_network_cidr = None
if vip_ipv6:
vip_network_cidr = ipaddress.IPv6Network(vip_cidr).with_prefixlen
# Get the VIP subnet for the amphora
# For amphorav2 amphorae_network_config will be a list of dicts
if isinstance(amp_net_config, dict):
additional_vip_data = amp_net_config['additional_vip_data']
vip_subnet = amp_net_config[constants.VIP_SUBNET]
else:
vip_network_cidr = ipaddress.IPv4Network(vip_cidr).with_prefixlen
additional_vip_data = [
add_vip.to_dict(recurse=True)
for add_vip in amp_net_config.additional_vip_data]
vip_subnet = amp_net_config.vip_subnet.to_dict()
# Sort VIPs by their IP so we can guarantee interface_index matching
sorted_add_vips = sorted(additional_vip_data,
key=lambda x: x['ip_address'])
# The primary VIP is always first in the list
vip_list = [{
'ip_address': loadbalancer.vip.ip_address,
'subnet': vip_subnet
}] + sorted_add_vips
# Handle the case of multiple IP family types
vrrp_addr = ipaddress.ip_address(amphora.vrrp_ip)
vrrp_ipv6 = vrrp_addr.version == 6
# Handle all VIPs:
rendered_vips = []
for index, add_vip in enumerate(vip_list):
# Validate the VIP address and see if it is IPv6
vip = add_vip['ip_address']
vip_addr = ipaddress.ip_address(vip)
vip_ipv6 = vip_addr.version == 6
vip_cidr = add_vip['subnet']['cidr']
# Normalize and validate the VIP subnet CIDR
# TODO(gthiemonge) Don't require a if block
if vip_ipv6:
vip_network_cidr = ipaddress.IPv6Network(
vip_cidr).with_prefixlen
else:
vip_network_cidr = ipaddress.IPv4Network(
vip_cidr).with_prefixlen
host_routes = add_vip['subnet'].get('host_routes', [])
# Addresses that aren't the same family as the VRRP
# interface will be in the "excluded" block
rendered_vips.append({
'ip_address': vip,
'network_cidr': vip_network_cidr,
'ipv6': vip_ipv6,
'interface_index': index,
'gateway': add_vip['subnet']['gateway_ip'],
'excluded': vip_ipv6 != vrrp_ipv6,
'host_routes': host_routes
})
for amp in filter(
lambda amp: amp.status == constants.AMPHORA_ALLOCATED,
@@ -100,7 +149,6 @@ class KeepalivedJinjaTemplater(object):
'vrrp_auth_pass': loadbalancer.vrrp_group.vrrp_auth_pass,
'amp_vrrp_ip': amphora.vrrp_ip,
'peers_vrrp_ips': peers_ips,
'vip_ip_address': vip,
'advert_int': loadbalancer.vrrp_group.advert_int,
'check_script_path': util.keepalived_check_script_path(),
'vrrp_check_interval':
@@ -108,6 +156,5 @@ class KeepalivedJinjaTemplater(object):
'vrrp_fail_count': CONF.keepalived_vrrp.vrrp_fail_count,
'vrrp_success_count':
CONF.keepalived_vrrp.vrrp_success_count,
'vip_network_cidr': vip_network_cidr,
'vip_ipv6': vip_ipv6},
'vips': rendered_vips},
constants=constants)
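# Illustrative rendered_vips entry (example values): with an IPv4 VRRP
# address, an additional IPv6 VIP sorted to interface_index 1 renders as
#   {'ip_address': '2001:db8::b33f', 'network_cidr': '2001:db8::/64',
#    'ipv6': True, 'interface_index': 1, 'gateway': None,
#    'excluded': True, 'host_routes': []}
# so it lands in virtual_ipaddress_excluded, with its routes and rules
# in table 2 (1 + interface_index).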

View File

@@ -14,47 +14,65 @@
# under the License.
#}
vrrp_script check_script {
script {{ check_script_path }}
interval {{ vrrp_check_interval }}
fall {{ vrrp_fail_count }}
rise {{ vrrp_success_count }}
script {{ check_script_path }}
interval {{ vrrp_check_interval }}
fall {{ vrrp_fail_count }}
rise {{ vrrp_success_count }}
}
vrrp_instance {{ vrrp_group_name }} {
state {{ amp_role }}
interface {{ amp_intf }}
virtual_router_id {{ amp_vrrp_id }}
priority {{ amp_priority }}
nopreempt
accept
garp_master_refresh {{ vrrp_garp_refresh }}
garp_master_refresh_repeat {{ vrrp_garp_refresh_repeat }}
advert_int {{ advert_int }}
authentication {
auth_type {{ vrrp_auth_type }}
auth_pass {{ vrrp_auth_pass }}
}
state {{ amp_role }}
interface {{ amp_intf }}
virtual_router_id {{ amp_vrrp_id }}
priority {{ amp_priority }}
nopreempt
accept
garp_master_refresh {{ vrrp_garp_refresh }}
garp_master_refresh_repeat {{ vrrp_garp_refresh_repeat }}
advert_int {{ advert_int }}
authentication {
auth_type {{ vrrp_auth_type }}
auth_pass {{ vrrp_auth_pass }}
}
unicast_src_ip {{ amp_vrrp_ip }}
unicast_peer {
unicast_src_ip {{ amp_vrrp_ip }}
unicast_peer {
{% for amp_vrrp_ip in peers_vrrp_ips %}
{{ amp_vrrp_ip }}
{{ amp_vrrp_ip }}
{% endfor %}
}
}
virtual_ipaddress {
{{ vip_ip_address }}
}
virtual_ipaddress {
{% for vip in vips if not vip.excluded %}
{{ vip.ip_address }}
{% endfor %}
}
virtual_routes {
{{ vip_network_cidr }} dev {{ amp_intf }} src {{ vip_ip_address }} scope link table 1
}
virtual_ipaddress_excluded {
{% for vip in vips if vip.excluded %}
{{ vip.ip_address }}
{% endfor %}
}
virtual_rules {
from {{ vip_ip_address }}/{{ '128' if vip_ipv6 else '32' }} table 1 priority 100
}
virtual_routes {
{% for vip in vips %}
{{ vip.network_cidr }} dev {{ amp_intf }} src {{ vip.ip_address }} scope link table {{ 1 + vip.interface_index }}
{% if vip.gateway %}
default via {{ vip.gateway }} dev {{ amp_intf }} onlink table {{ 1 + vip.interface_index }}
{% endif %}
{% for host_route in vip.host_routes %}
{{ host_route.destination }} dev {{ amp_intf }} gateway {{ host_route.nexthop }} onlink table {{ 1 + vip.interface_index }}
{% endfor %}
{% endfor %}
}
track_script {
check_script
}
virtual_rules {
{% for vip in vips %}
from {{ vip.ip_address }}/{{ '128' if vip.ipv6 else '32' }} table {{ 1 + vip.interface_index }} priority 100
{% endfor %}
}
track_script {
check_script
}
}

View File

@@ -51,17 +51,10 @@ class KeepalivedAmphoraDriverMixin(driver_base.VRRPDriverMixin):
LOG.debug("Update amphora %s VRRP configuration.", amphora.id)
self._populate_amphora_api_version(amphora)
# Get the VIP subnet prefix for the amphora
# For amphorav2 amphorae_network_config will be a list of dicts
try:
vip_cidr = amphorae_network_config[amphora.id].vip_subnet.cidr
except AttributeError:
vip_cidr = amphorae_network_config[amphora.id][
constants.VIP_SUBNET][constants.CIDR]
# Generate Keepalived configuration from loadbalancer object
config = templater.build_keepalived_config(
loadbalancer, amphora, vip_cidr)
loadbalancer, amphora, amphorae_network_config[amphora.id])
self.clients[amphora.api_version].upload_vrrp_config(amphora, config)
def stop_vrrp_service(self, loadbalancer):

View File

@@ -93,7 +93,8 @@ class NoopManager(object):
'post_network_plug')
def post_vip_plug(self, amphora, load_balancer, amphorae_network_config,
vrrp_port=None, vip_subnet=None):
vrrp_port=None, vip_subnet=None,
additional_vip_data=None):
LOG.debug("Amphora %s no-op, post vip plug load balancer %s",
self.__class__.__name__, load_balancer.id)
self.amphoraconfig[(load_balancer.id, id(amphorae_network_config))] = (
@@ -166,11 +167,13 @@ class NoopAmphoraLoadBalancerDriver(
self.driver.post_network_plug(amphora, port, amphora_network_config)
def post_vip_plug(self, amphora, load_balancer, amphorae_network_config,
vrrp_port=None, vip_subnet=None):
vrrp_port=None, vip_subnet=None,
additional_vip_data=None):
self.driver.post_vip_plug(amphora,
load_balancer, amphorae_network_config,
vrrp_port=vrrp_port, vip_subnet=vip_subnet)
vrrp_port=vrrp_port, vip_subnet=vip_subnet,
additional_vip_data=additional_vip_data)
def upload_cert_amp(self, amphora, pem_file):

View File

@@ -98,10 +98,15 @@ class AmphoraProviderDriver(driver_base.ProviderDriver):
operator_fault_string=msg)
# Load Balancer
def create_vip_port(self, loadbalancer_id, project_id, vip_dictionary):
def create_vip_port(self, loadbalancer_id, project_id, vip_dictionary,
additional_vip_dicts):
vip_obj = driver_utils.provider_vip_dict_to_vip_obj(vip_dictionary)
add_vip_objs = [
driver_utils.provider_additional_vip_dict_to_vip_obj(add_vip)
for add_vip in additional_vip_dicts]
lb_obj = data_models.LoadBalancer(id=loadbalancer_id,
project_id=project_id, vip=vip_obj)
project_id=project_id, vip=vip_obj,
additional_vips=add_vip_objs)
network_driver = utils.get_network_driver()
vip_network = network_driver.get_network(
@@ -112,7 +117,7 @@ class AmphoraProviderDriver(driver_base.ProviderDriver):
operator_fault_string=message)
try:
vip = network_driver.allocate_vip(lb_obj)
vip, add_vips = network_driver.allocate_vip(lb_obj)
except network_base.AllocateVIPException as e:
message = str(e)
if getattr(e, 'orig_msg', None) is not None:
@@ -122,7 +127,10 @@ class AmphoraProviderDriver(driver_base.ProviderDriver):
LOG.info('Amphora provider created VIP port %s for load balancer %s.',
vip.port_id, loadbalancer_id)
return driver_utils.vip_dict_to_provider_dict(vip.to_dict())
vip_return_dict = driver_utils.vip_dict_to_provider_dict(vip.to_dict())
add_return_dicts = [driver_utils.additional_vip_dict_to_provider_dict(
add_vip.to_dict()) for add_vip in add_vips]
return vip_return_dict, add_return_dicts
# TODO(johnsom) convert this to octavia_lib constant flavor
# once octavia is transitioned to use octavia_lib

View File

@@ -100,10 +100,15 @@ class AmphoraProviderDriver(driver_base.ProviderDriver):
operator_fault_string=msg)
# Load Balancer
def create_vip_port(self, loadbalancer_id, project_id, vip_dictionary):
def create_vip_port(self, loadbalancer_id, project_id, vip_dictionary,
additional_vip_dicts):
vip_obj = driver_utils.provider_vip_dict_to_vip_obj(vip_dictionary)
add_vip_objs = [
driver_utils.provider_additional_vip_dict_to_vip_obj(add_vip)
for add_vip in additional_vip_dicts]
lb_obj = data_models.LoadBalancer(id=loadbalancer_id,
project_id=project_id, vip=vip_obj)
project_id=project_id, vip=vip_obj,
additional_vips=add_vip_objs)
network_driver = utils.get_network_driver()
vip_network = network_driver.get_network(
@@ -114,7 +119,7 @@ class AmphoraProviderDriver(driver_base.ProviderDriver):
operator_fault_string=message)
try:
vip = network_driver.allocate_vip(lb_obj)
vip, add_vips = network_driver.allocate_vip(lb_obj)
except network_base.AllocateVIPException as e:
message = str(e)
if getattr(e, 'orig_msg', None) is not None:
@@ -124,7 +129,10 @@ class AmphoraProviderDriver(driver_base.ProviderDriver):
LOG.info('Amphora provider created VIP port %s for load balancer %s.',
vip.port_id, loadbalancer_id)
return driver_utils.vip_dict_to_provider_dict(vip.to_dict())
vip_return_dict = driver_utils.vip_dict_to_provider_dict(vip.to_dict())
add_return_dicts = [driver_utils.additional_vip_dict_to_provider_dict(
add_vip.to_dict()) for add_vip in add_vips]
return vip_return_dict, add_return_dicts
# TODO(johnsom) convert this to octavia_lib constant flavor
# once octavia is transitioned to use octavia_lib
@@ -325,9 +333,19 @@ class AmphoraProviderDriver(driver_base.ProviderDriver):
for listener in db_pool.listeners:
lb = listener.load_balancer
vip_is_ipv6 = utils.is_ipv6(lb.vip.ip_address)
vips = [lb.vip]
vips.extend(lb.additional_vips)
lb_has_ipv4 = [
True
for vip in vips
if utils.is_ipv4(vip.ip_address)]
lb_has_ipv6 = [
True
for vip in vips
if utils.is_ipv6(vip.ip_address)]
if member_is_ipv6 != vip_is_ipv6:
if ((member_is_ipv6 and not lb_has_ipv6) or
(not member_is_ipv6 and not lb_has_ipv4)):
msg = ("This provider doesn't support mixing IPv4 and "
"IPv6 addresses for its VIP and members in {} "
"load balancers.".format(db_pool.protocol))

View File

@@ -18,6 +18,8 @@ from oslo_utils import uuidutils
from octavia_lib.api.drivers import data_models
from octavia_lib.api.drivers import provider_base as driver_base
from octavia.api.drivers import utils as driver_utils
LOG = logging.getLogger(__name__)
@@ -27,12 +29,14 @@ class NoopManager(object):
self.driverconfig = {}
# Load Balancer
def create_vip_port(self, loadbalancer_id, project_id, vip_dictionary):
def create_vip_port(self, loadbalancer_id, project_id, vip_dictionary,
additional_vip_dicts):
LOG.debug('Provider %s no-op, create_vip_port loadbalancer %s',
self.__class__.__name__, loadbalancer_id)
self.driverconfig[loadbalancer_id] = (loadbalancer_id, project_id,
vip_dictionary,
additional_vip_dicts,
'create_vip_port')
vip_address = vip_dictionary.get('vip_address', '198.0.2.5')
@@ -43,10 +47,16 @@ class NoopManager(object):
vip_subnet_id = vip_dictionary.get('vip_subnet_id',
uuidutils.generate_uuid())
return data_models.VIP(vip_address=vip_address,
vip_network_id=vip_network_id,
vip_port_id=vip_port_id,
vip_subnet_id=vip_subnet_id).to_dict()
vip = data_models.VIP(vip_address=vip_address,
vip_network_id=vip_network_id,
vip_port_id=vip_port_id,
vip_subnet_id=vip_subnet_id)
vip_return_dict = vip.to_dict()
additional_vip_dicts = additional_vip_dicts or []
add_return_dicts = [driver_utils.additional_vip_dict_to_provider_dict(
add_vip) for add_vip in additional_vip_dicts]
return vip_return_dict, add_return_dicts
def loadbalancer_create(self, loadbalancer):
LOG.debug('Provider %s no-op, loadbalancer_create loadbalancer %s',
@@ -266,9 +276,11 @@ class NoopProviderDriver(driver_base.ProviderDriver):
self.driver = NoopManager()
# Load Balancer
def create_vip_port(self, loadbalancer_id, project_id, vip_dictionary):
def create_vip_port(self, loadbalancer_id, project_id, vip_dictionary,
additional_vip_dicts):
return self.driver.create_vip_port(loadbalancer_id, project_id,
vip_dictionary)
vip_dictionary,
additional_vip_dicts)
def loadbalancer_create(self, loadbalancer):
self.driver.loadbalancer_create(loadbalancer)

View File

@@ -120,7 +120,7 @@ def _base_to_provider_dict(current_dict, include_project_id=False):
# Note: The provider dict returned from this method will have provider
# data model objects in it.
def lb_dict_to_provider_dict(lb_dict, vip=None, db_pools=None,
def lb_dict_to_provider_dict(lb_dict, vip=None, add_vips=None, db_pools=None,
db_listeners=None, for_delete=False):
new_lb_dict = _base_to_provider_dict(lb_dict, include_project_id=True)
new_lb_dict['loadbalancer_id'] = new_lb_dict.pop('id')
@@ -134,6 +134,9 @@ def lb_dict_to_provider_dict(lb_dict, vip=None, db_pools=None,
flavor_repo = repositories.FlavorRepository()
new_lb_dict['flavor'] = flavor_repo.get_flavor_metadata_dict(
db_api.get_session(), lb_dict['flavor_id'])
if add_vips:
new_lb_dict['additional_vips'] = db_additional_vips_to_provider_vips(
add_vips)
if db_pools:
new_lb_dict['pools'] = db_pools_to_provider_pools(
db_pools, for_delete=for_delete)
@@ -326,6 +329,14 @@ def listener_dict_to_provider_dict(listener_dict, for_delete=False):
return new_listener_dict
def db_additional_vips_to_provider_vips(db_add_vips):
provider_add_vips = []
for add_vip in db_add_vips:
provider_add_vips.append(
additional_vip_dict_to_provider_dict(add_vip.to_dict()))
return provider_add_vips
def db_pools_to_provider_pools(db_pools, for_delete=False):
provider_pools = []
for pool in db_pools:
@@ -554,6 +565,19 @@ def vip_dict_to_provider_dict(vip_dict):
return new_vip_dict
def additional_vip_dict_to_provider_dict(vip_dict):
new_vip_dict = {}
if 'ip_address' in vip_dict:
new_vip_dict['ip_address'] = vip_dict['ip_address']
if 'network_id' in vip_dict:
new_vip_dict['network_id'] = vip_dict['network_id']
if 'port_id' in vip_dict:
new_vip_dict['port_id'] = vip_dict['port_id']
if 'subnet_id' in vip_dict:
new_vip_dict['subnet_id'] = vip_dict['subnet_id']
return new_vip_dict
def provider_vip_dict_to_vip_obj(vip_dictionary):
vip_obj = data_models.Vip()
if 'vip_address' in vip_dictionary:
@@ -569,3 +593,16 @@ def provider_vip_dict_to_vip_obj(vip_dictionary):
if constants.OCTAVIA_OWNED in vip_dictionary:
vip_obj.octavia_owned = vip_dictionary[constants.OCTAVIA_OWNED]
return vip_obj
def provider_additional_vip_dict_to_vip_obj(vip_dictionary):
vip_obj = data_models.Vip()
if 'ip_address' in vip_dictionary:
vip_obj.ip_address = vip_dictionary['ip_address']
if 'network_id' in vip_dictionary:
vip_obj.network_id = vip_dictionary['network_id']
if 'port_id' in vip_dictionary:
vip_obj.port_id = vip_dictionary['port_id']
if 'subnet_id' in vip_dictionary:
vip_obj.subnet_id = vip_dictionary['subnet_id']
return vip_obj

View File

@@ -139,4 +139,7 @@ class RootController(object):
# ALPN protocols (pool)
self._add_a_version(versions, 'v2.24', 'v2', 'CURRENT',
'2020-10-15T00:00:00Z', host_url)
# Additional VIPs
self._add_a_version(versions, 'v2.25', 'v2', 'CURRENT',
'2020-04-08T00:00:00Z', host_url)
return {'versions': versions}

View File

@@ -223,6 +223,33 @@ class LoadBalancersController(base.BaseController):
"VIP port's subnet could not be determined. Please "
"specify either a VIP subnet or address."))
@staticmethod
def _validate_subnets_share_network_but_no_duplicates(load_balancer):
# Validate that no subnet_id is used more than once
subnet_use_counts = {load_balancer.vip_subnet_id: 1}
for vip in load_balancer.additional_vips:
if vip.subnet_id in subnet_use_counts:
raise exceptions.ValidationException(detail=_(
'Duplicate VIP subnet(s) specified. Only one IP can be '
'bound per subnet.'))
subnet_use_counts[vip.subnet_id] = 1
# Validate that all subnets belong to the same network
network_driver = utils.get_network_driver()
used_subnets = {}
for subnet_id in subnet_use_counts:
used_subnets[subnet_id] = network_driver.get_subnet(subnet_id)
all_networks = [subnet.network_id for subnet in used_subnets.values()]
if len(set(all_networks)) > 1:
LOG.debug("Used subnets: %(subnets)s", {'subnets': used_subnets})
LOG.debug("All networks: %(networks)s", {'networks': all_networks})
raise exceptions.ValidationException(detail=_(
'All VIP subnets must belong to the same network.'
))
# Fill the network_id for each additional_vip
for vip in load_balancer.additional_vips:
vip.network_id = used_subnets[vip.subnet_id].network_id
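# Illustrative outcomes (hypothetical IDs): two additional_vips sharing
# one subnet_id fail with "Duplicate VIP subnet(s) specified"; an
# additional VIP whose subnet belongs to a different network than the
# primary VIP's fails with "All VIP subnets must belong to the same
# network."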
def _validate_vip_request_object(self, load_balancer, context=None):
allowed_network_objects = []
if CONF.networking.allow_vip_port_id:
@@ -270,7 +297,18 @@ class LoadBalancersController(base.BaseController):
validate.qos_policy_exists(
qos_policy_id=load_balancer.vip_qos_policy_id)
def _create_vip_port_if_not_exist(self, load_balancer_db):
# Even though we've just validated the subnet or else retrieved its ID
# directly from the port, we might still be missing the network.
if not load_balancer.vip_network_id:
subnet = validate.subnet_exists(
subnet_id=load_balancer.vip_subnet_id)
load_balancer.vip_network_id = subnet.network_id
# Multi-vip validation for ensuring subnets are "sane"
self._validate_subnets_share_network_but_no_duplicates(load_balancer)
@staticmethod
def _create_vip_port_if_not_exist(load_balancer_db):
"""Create vip port."""
network_driver = utils.get_network_driver()
try:
@@ -436,6 +474,7 @@ class LoadBalancersController(base.BaseController):
render_unsets=False
))
vip_dict = lb_dict.pop('vip', {})
additional_vip_dicts = lb_dict.pop('additional_vips', [])
# Make sure we store the right provider in the DB
lb_dict['provider'] = driver.name
@@ -455,7 +494,7 @@ class LoadBalancersController(base.BaseController):
valid_networks=az_dict.get(constants.VALID_VIP_NETWORKS))
db_lb = self.repositories.create_load_balancer_and_vip(
lock_session, lb_dict, vip_dict)
lock_session, lb_dict, vip_dict, additional_vip_dicts)
# Pass the flavor dictionary through for the provider drivers
# This is a "virtual" lb_dict item that includes the expanded
@@ -473,14 +512,20 @@ class LoadBalancersController(base.BaseController):
try:
provider_vip_dict = driver_utils.vip_dict_to_provider_dict(
vip_dict)
vip_dict = driver_utils.call_provider(
provider_additional_vips = [
driver_utils.additional_vip_dict_to_provider_dict(add_vip)
for add_vip in additional_vip_dicts]
vip_dict, additional_vip_dicts = driver_utils.call_provider(
driver.name, driver.create_vip_port, db_lb.id,
db_lb.project_id, provider_vip_dict)
db_lb.project_id, provider_vip_dict,
provider_additional_vips)
vip = driver_utils.provider_vip_dict_to_vip_obj(vip_dict)
add_vips = [data_models.AdditionalVip(**add_vip)
for add_vip in additional_vip_dicts]
except exceptions.ProviderNotImplementedError:
# create the VIP port ourselves, since the driver didn't want to
# create it
vip = self._create_vip_port_if_not_exist(db_lb)
vip, add_vips = self._create_vip_port_if_not_exist(db_lb)
LOG.info('Created VIP port %s for provider %s.',
vip.port_id, driver.name)
# If a port_id wasn't passed in and we made it this far
@@ -496,6 +541,11 @@ class LoadBalancersController(base.BaseController):
lock_session, db_lb.id, ip_address=vip.ip_address,
port_id=vip.port_id, network_id=vip.network_id,
subnet_id=vip.subnet_id, octavia_owned=octavia_owned)
for add_vip in add_vips:
self.repositories.additional_vip.update(
lock_session, db_lb.id, ip_address=add_vip.ip_address,
port_id=add_vip.port_id, network_id=add_vip.network_id,
subnet_id=add_vip.subnet_id)
if listeners or pools:
db_pools, db_lists = self._graph_create(
@@ -503,7 +553,7 @@ class LoadBalancersController(base.BaseController):
# Prepare the data for the driver data model
driver_lb_dict = driver_utils.lb_dict_to_provider_dict(
lb_dict, vip, db_pools, db_lists)
lb_dict, vip, add_vips, db_pools, db_lists)
# Dispatch to the driver
LOG.info("Sending create Load Balancer %s to provider %s",

View File

@@ -34,6 +34,12 @@ class BaseLoadBalancerType(types.BaseType):
'qos_policy_id': 'vip_qos_policy_id'}}
class AdditionalVipsType(types.BaseType):
"""Type for additional vips"""
subnet_id = wtypes.wsattr(wtypes.UuidType(), mandatory=True)
ip_address = wtypes.wsattr(types.IPAddressType())
class LoadBalancerResponse(BaseLoadBalancerType):
"""Defines which attributes are to be shown on any response."""
id = wtypes.wsattr(wtypes.UuidType())
@@ -49,6 +55,7 @@ class LoadBalancerResponse(BaseLoadBalancerType):
vip_port_id = wtypes.wsattr(wtypes.UuidType())
vip_subnet_id = wtypes.wsattr(wtypes.UuidType())
vip_network_id = wtypes.wsattr(wtypes.UuidType())
additional_vips = wtypes.wsattr([AdditionalVipsType])
listeners = wtypes.wsattr([types.IdOnlyType])
pools = wtypes.wsattr([types.IdOnlyType])
provider = wtypes.wsattr(wtypes.StringType())
@@ -67,6 +74,9 @@ class LoadBalancerResponse(BaseLoadBalancerType):
result.vip_address = data_model.vip.ip_address
result.vip_network_id = data_model.vip.network_id
result.vip_qos_policy_id = data_model.vip.qos_policy_id
result.additional_vips = [
AdditionalVipsType.from_data_model(i)
for i in data_model.additional_vips]
if cls._full_response():
listener_model = listener.ListenerFullResponse
pool_model = pool.PoolFullResponse
@@ -117,6 +127,7 @@ class LoadBalancerPOST(BaseLoadBalancerType):
vip_subnet_id = wtypes.wsattr(wtypes.UuidType())
vip_network_id = wtypes.wsattr(wtypes.UuidType())
vip_qos_policy_id = wtypes.wsattr(wtypes.UuidType())
additional_vips = wtypes.wsattr([AdditionalVipsType], default=[])
project_id = wtypes.wsattr(wtypes.StringType(max_length=36))
listeners = wtypes.wsattr([listener.ListenerSingleCreate], default=[])
pools = wtypes.wsattr([pool.PoolSingleCreate], default=[])

View File

@@ -299,6 +299,7 @@ SUPPORTED_TASKFLOW_ENGINE_TYPES = ['serial', 'parallel']
ACTIVE_CONNECTIONS = 'active_connections'
ADD_NICS = 'add_nics'
ADD_SUBNETS = 'add_subnets'
ADDITIONAL_VIPS = 'additional_vips'
ADMIN_STATE_UP = 'admin_state_up'
ALLOWED_ADDRESS_PAIRS = 'allowed_address_pairs'
AMP_DATA = 'amp_data'

View File

@@ -489,7 +489,8 @@ class LoadBalancer(BaseDataModel):
topology=None, vip=None, listeners=None, amphorae=None,
pools=None, vrrp_group=None, server_group_id=None,
created_at=None, updated_at=None, provider=None, tags=None,
flavor_id=None, availability_zone=None):
flavor_id=None, availability_zone=None,
additional_vips=None):
self.id = id
self.project_id = project_id
@@ -511,6 +512,7 @@ class LoadBalancer(BaseDataModel):
self.tags = tags or []
self.flavor_id = flavor_id
self.availability_zone = availability_zone
self.additional_vips = additional_vips or []
def update(self, update_dict):
for key, value in update_dict.items():
@@ -553,6 +555,18 @@ class Vip(BaseDataModel):
self.octavia_owned = octavia_owned
class AdditionalVip(BaseDataModel):
def __init__(self, load_balancer_id=None, ip_address=None, subnet_id=None,
network_id=None, port_id=None, load_balancer=None):
self.load_balancer_id = load_balancer_id
self.ip_address = ip_address
self.subnet_id = subnet_id
self.network_id = network_id
self.port_id = port_id
self.load_balancer = load_balancer
class SNI(BaseDataModel):
def __init__(self, listener_id=None, position=None, listener=None,

View File

@@ -185,10 +185,13 @@ class JinjaTemplater(object):
continue
listener_transforms.append(self._transform_listener(
listener, tls_certs, feature_compatibility, loadbalancer))
additional_vips = [
vip.ip_address for vip in loadbalancer.additional_vips]
ret_value = {
'id': loadbalancer.id,
'vip_address': loadbalancer.vip.ip_address,
'additional_vips': additional_vips,
'listeners': listener_transforms,
'topology': loadbalancer.topology,
'enabled': loadbalancer.enabled,

View File

@@ -31,7 +31,8 @@
{% block proxies %}
{% if loadbalancer.enabled %}
{% for listener in loadbalancer.listeners if listener.enabled %}
{{- frontend_macro(constants, lib_consts, listener, loadbalancer.vip_address) }}
{{- frontend_macro(constants, lib_consts, listener, loadbalancer.vip_address,
loadbalancer.additional_vips) }}
{% for pool in listener.pools if pool.enabled %}
{{- backend_macro(constants, lib_consts, listener, pool, loadbalancer) }}
{% endfor %}

View File

@@ -158,7 +158,7 @@ bind {{ lb_vip_address }}:{{ listener.protocol_port }} {{
{% endmacro %}
{% macro frontend_macro(constants, lib_consts, listener, lb_vip_address) %}
{% macro frontend_macro(constants, lib_consts, listener, lb_vip_address, additional_vips) %}
frontend {{ listener.id }}
{% if listener.connection_limit is defined %}
maxconn {{ listener.connection_limit }}
@@ -168,6 +168,9 @@ frontend {{ listener.id }}
redirect scheme https if !{ ssl_fc }
{% endif %}
{{ bind_macro(constants, lib_consts, listener, lb_vip_address)|trim() }}
{% for add_vip in additional_vips %}
{{ bind_macro(constants, lib_consts, listener, add_vip)|trim() }}
{% endfor %}
mode {{ listener.protocol_mode }}
{% for l7policy in listener.l7policies if (l7policy.enabled and
l7policy.l7rules|length > 0) %}

View File

@@ -16,6 +16,7 @@ import os
import re
import jinja2
from oslo_log import log as logging
from octavia.common.config import cfg
from octavia.common import constants
@ -50,6 +51,7 @@ HAPROXY_TEMPLATE = os.path.abspath(
'templates/haproxy.cfg.j2'))
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
JINJA_ENV = None
@@ -180,9 +182,14 @@ class JinjaTemplater(object):
listener, feature_compatibility, loadbalancer,
client_ca_filename=client_ca_filename, client_crl=client_crl,
pool_tls_certs=pool_tls_certs)
additional_vips = [
vip.ip_address for vip in loadbalancer.additional_vips]
LOG.debug('Got vips: %(vips)s', {'vips': loadbalancer.additional_vips})
LOG.debug('Got vips: %(vips)s', {'vips': additional_vips})
ret_value = {
'id': loadbalancer.id,
'vip_address': loadbalancer.vip.ip_address,
'additional_vips': additional_vips,
'listener': t_listener,
'topology': loadbalancer.topology,
'enabled': loadbalancer.enabled,

View File

@@ -32,7 +32,7 @@
{% block proxies %}
{% if loadbalancer.enabled and loadbalancer.listener.enabled %}
{{- frontend_macro(constants, loadbalancer.listener,
loadbalancer.vip_address) }}
loadbalancer.vip_address, loadbalancer.additional_vips) }}
{% for pool in loadbalancer.listener.pools if pool.enabled %}
{{- backend_macro(constants, loadbalancer.listener, pool) }}
{% endfor %}

View File

@@ -132,7 +132,7 @@ bind {{ lb_vip_address }}:{{ listener.protocol_port }} {{
{% endmacro %}
{% macro frontend_macro(constants, listener, lb_vip_address) %}
{% macro frontend_macro(constants, listener, lb_vip_address, additional_vips) %}
frontend {{ listener.id }}
{% if listener.connection_limit is defined %}
maxconn {{ listener.connection_limit }}
@@ -142,6 +142,9 @@ frontend {{ listener.id }}
redirect scheme https if !{ ssl_fc }
{% endif %}
{{ bind_macro(constants, listener, lb_vip_address)|trim() }}
{% for add_vip in additional_vips %}
{{ bind_macro(constants, listener, add_vip)|trim() }}
{% endfor %}
mode {{ listener.protocol_mode }}
{% for l7policy in listener.l7policies if (l7policy.enabled and
l7policy.l7rules|length > 0) %}

View File

@@ -103,11 +103,19 @@ class LvsJinjaTemplater(object):
be processed by the templating system
"""
t_listener = self._transform_listener(listener)
vips = [
{
'ip_address': vip.ip_address,
'ip_version': octavia_utils.ip_version(
vip.ip_address),
}
for vip in [loadbalancer.vip] + loadbalancer.additional_vips
]
ret_value = {
'id': loadbalancer.id,
'vip_address': loadbalancer.vip.ip_address,
'vips': vips,
'listener': t_listener,
'enabled': loadbalancer.enabled
'enabled': loadbalancer.enabled,
}
return ret_value
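# Illustrative 'vips' value built above (example addresses):
#   [{'ip_address': '203.0.113.50', 'ip_version': 4},
#    {'ip_address': '2001:db8::b33f', 'ip_version': 6}]
# Members now carry 'ip_version' too (see below), so the template can
# pair each family's virtual_server_group with same-family real servers.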
@@ -182,6 +190,8 @@ class LvsJinjaTemplater(object):
return {
'id': member.id,
'address': member.ip_address,
'ip_version': octavia_utils.ip_version(
member.ip_address),
'protocol_port': member.protocol_port,
'weight': member.weight,
'enabled': member.enabled,

View File

@@ -23,7 +23,7 @@ net_namespace {{ constants.AMPHORA_NAMESPACE }}
{% if loadbalancer.enabled and loadbalancer.listener.enabled %}
{{- virtualserver_macro(constants, lib_consts,
loadbalancer.listener,
loadbalancer.vip_address,
loadbalancer.vips,
loadbalancer.listener.get('default_pool', None)) }}
{% endif %}
{% endblock proxies %}

View File

@@ -101,9 +101,25 @@ TCP_CHECK {
{% endif %}
{% endmacro %}
{% macro virtualserver_macro(constants, lib_consts, listener, lb_vip_address, default_pool) %}
{% macro virtualserver_macro(constants, lib_consts, listener, vips, default_pool) %}
{% if default_pool %}
virtual_server {{ lb_vip_address }} {{ listener.protocol_port }} {
{% for ip_version in (4, 6) %}
{%- set has_vip = namespace(found=False) %}
{%- for vip in vips %}
{%- if vip.ip_version == ip_version %}
{%- set has_vip.found = True %}
{%- endif %}
{%- endfor %}
{% if has_vip.found %}
virtual_server_group ipv{{ ip_version }}-group {
{% for vip in vips %}
{% if vip.ip_version == ip_version %}
{{ vip.ip_address }} {{ listener.protocol_port }}
{% endif %}
{% endfor %}
}
virtual_server group ipv{{ ip_version }}-group {
{{ lb_algo_macro(default_pool) }}
lb_kind NAT
protocol {{ listener.protocol_mode.upper() }}
@ -133,9 +149,13 @@ virtual_server {{ lb_vip_address }} {{ listener.protocol_port }} {
# Configuration for HealthMonitor {{ default_pool.health_monitor.id }}
{% endif %}
{% for member in default_pool.members %}
{{- realserver_macro(constants, lib_consts, default_pool, member, listener) }}
{% if member.ip_version == ip_version %}
{{- realserver_macro(constants, lib_consts, default_pool, member, listener) }}
{% endif %}
{% endfor %}
{% endif %}
}
{% endif %}
{% endfor %}
{% endif %}
{% endmacro %}
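
For illustration (values assumed: IPv4 VIP 203.0.113.50, IPv6 VIP 2001:db8::b33f, a UDP listener on port 53), the reworked macro emits one address group plus one virtual_server per IP family, roughly:

virtual_server_group ipv4-group {
    203.0.113.50 53
}
virtual_server group ipv4-group {
    lb_algo rr            # from lb_algo_macro, depends on the pool
    lb_kind NAT
    protocol UDP
    # health monitor block, if configured
    # realserver entries for IPv4 members only
}

virtual_server_group ipv6-group {
    2001:db8::b33f 53
}
virtual_server group ipv6-group {
    lb_algo rr
    lb_kind NAT
    protocol UDP
    # realserver entries for IPv6 members only
}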

View File

@ -71,6 +71,11 @@ def get_network_driver():
return network_driver
def ip_version(ip_address):
ip = netaddr.IPAddress(ip_address)
return ip.version
def is_ipv4(ip_address):
"""Check if ip address is IPv4 address."""
ip = netaddr.IPAddress(ip_address)

View File

@ -64,10 +64,13 @@ class LoadBalancerFlows(object):
))
lb_create_flow.add(network_tasks.AllocateVIP(
requires=constants.LOADBALANCER,
provides=constants.VIP))
provides=(constants.VIP, constants.ADDITIONAL_VIPS)))
lb_create_flow.add(database_tasks.UpdateVIPAfterAllocation(
requires=(constants.LOADBALANCER_ID, constants.VIP),
provides=constants.LOADBALANCER))
lb_create_flow.add(database_tasks.UpdateAdditionalVIPsAfterAllocation(
requires=(constants.LOADBALANCER_ID, constants.ADDITIONAL_VIPS),
provides=constants.LOADBALANCER))
lb_create_flow.add(network_tasks.UpdateVIPSecurityGroup(
requires=constants.LOADBALANCER_ID))
lb_create_flow.add(network_tasks.GetSubnetFromVIP(
@ -419,12 +422,18 @@ class LoadBalancerFlows(object):
# Check that the VIP port exists and is ok
failover_LB_flow.add(
network_tasks.AllocateVIPforFailover(
requires=constants.LOADBALANCER, provides=constants.VIP))
requires=constants.LOADBALANCER,
provides=(constants.VIP, constants.ADDITIONAL_VIPS)))
# Update the database with the VIP information
failover_LB_flow.add(database_tasks.UpdateVIPAfterAllocation(
requires=(constants.LOADBALANCER_ID, constants.VIP),
provides=constants.LOADBALANCER))
failover_LB_flow.add(
database_tasks.UpdateAdditionalVIPsAfterAllocation(
requires=(constants.LOADBALANCER_ID,
constants.ADDITIONAL_VIPS),
provides=constants.LOADBALANCER))
# Make sure the SG has the correct rules and re-apply to the
# VIP port. It is not used on the VIP port, but will help lock

View File

@ -410,6 +410,28 @@ class UpdateVIPAfterAllocation(BaseDatabaseTask):
id=loadbalancer_id)
class UpdateAdditionalVIPsAfterAllocation(BaseDatabaseTask):
"""Update a VIP associated with a given load balancer."""
def execute(self, loadbalancer_id, additional_vips):
"""Update additional VIPs associated with a given load balancer.
:param loadbalancer_id: Id of the load balancer whose additional VIPs
should be updated.
:param additional_vips: list of data_models.Vip objects with update data.
:returns: The load balancer object.
"""
for vip in additional_vips:
LOG.debug('Updating additional VIP: subnet=%(subnet)s '
'ip_address=%(ip)s', {'subnet': vip.subnet_id,
'ip': vip.ip_address})
self.repos.additional_vip.update(
db_apis.get_session(), loadbalancer_id, vip.subnet_id,
ip_address=vip.ip_address, port_id=vip.port_id)
return self.repos.load_balancer.get(db_apis.get_session(),
id=loadbalancer_id)
class UpdateAmphoraeVIPData(BaseDatabaseTask):
"""Update amphorae VIP data."""

View File

@ -573,7 +573,12 @@ class AllocateVIP(BaseNetworkTask):
loadbalancer.vip.port_id,
loadbalancer.vip.subnet_id,
loadbalancer.vip.ip_address)
return self.network_driver.allocate_vip(loadbalancer)
vip, additional_vips = self.network_driver.allocate_vip(loadbalancer)
for add_vip in additional_vips:
LOG.debug('Allocated an additional VIP: subnet=%(subnet)s '
'ip_address=%(ip)s', {'subnet': add_vip.subnet_id,
'ip': add_vip.ip_address})
return vip, additional_vips
def revert(self, result, loadbalancer, *args, **kwargs):
"""Handle a failure to allocate vip."""
@ -581,7 +586,7 @@ class AllocateVIP(BaseNetworkTask):
if isinstance(result, failure.Failure):
LOG.exception("Unable to allocate VIP")
return
vip = result
vip, additional_vips = result
LOG.warning("Deallocating vip %s", vip.ip_address)
try:
self.network_driver.deallocate_vip(vip)
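
Because the task now provides a 2-tuple under two names, downstream tasks can require either name independently, and revert() receives the raw tuple back (hence the unpacking above). A toy standalone sketch of the tuple-provides pattern (not Octavia's flow; values assumed):

from taskflow import engines
from taskflow import task
from taskflow.patterns import linear_flow

class Allocate(task.Task):
    # A tuple result is split across the two provided names, which is
    # what provides=(constants.VIP, constants.ADDITIONAL_VIPS) relies on.
    default_provides = ('vip', 'additional_vips')

    def execute(self):
        return '203.0.113.50', ['2001:db8::b33f']

class Report(task.Task):
    def execute(self, vip, additional_vips):
        print(vip, additional_vips)

engines.run(linear_flow.Flow('demo').add(Allocate(), Report()))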

View File

@ -66,10 +66,13 @@ class LoadBalancerFlows(object):
))
lb_create_flow.add(network_tasks.AllocateVIP(
requires=constants.LOADBALANCER,
provides=constants.VIP))
provides=(constants.VIP, constants.ADDITIONAL_VIPS)))
lb_create_flow.add(database_tasks.UpdateVIPAfterAllocation(
requires=(constants.LOADBALANCER_ID, constants.VIP),
provides=constants.LOADBALANCER))
lb_create_flow.add(database_tasks.UpdateAdditionalVIPsAfterAllocation(
requires=(constants.LOADBALANCER_ID, constants.ADDITIONAL_VIPS),
provides=constants.LOADBALANCER))
lb_create_flow.add(network_tasks.UpdateVIPSecurityGroup(
requires=constants.LOADBALANCER_ID))
lb_create_flow.add(network_tasks.GetSubnetFromVIP(
@ -404,12 +407,18 @@ class LoadBalancerFlows(object):
# Check that the VIP port exists and is ok
failover_LB_flow.add(
network_tasks.AllocateVIPforFailover(
requires=constants.LOADBALANCER, provides=constants.VIP))
requires=constants.LOADBALANCER,
provides=(constants.VIP, constants.ADDITIONAL_VIPS)))
# Update the database with the VIP information
failover_LB_flow.add(database_tasks.UpdateVIPAfterAllocation(
requires=(constants.LOADBALANCER_ID, constants.VIP),
provides=constants.LOADBALANCER))
failover_LB_flow.add(
database_tasks.UpdateAdditionalVIPsAfterAllocation(
requires=(constants.LOADBALANCER_ID,
constants.ADDITIONAL_VIPS),
provides=constants.LOADBALANCER))
# Make sure the SG has the correct rules and re-apply to the
# VIP port. It is not used on the VIP port, but will help lock

View File

@ -350,9 +350,25 @@ class AmphoraPostVIPPlug(BaseAmphoraTask):
vip_subnet = data_models.Subnet(**vip_arg)
else:
vip_subnet = data_models.Subnet()
additional_vip_data = []
for add_vip in amphorae_network_config[
amphora[constants.ID]]['additional_vip_data']:
subnet_arg = copy.deepcopy(add_vip['subnet'])
subnet_arg['host_routes'] = [
data_models.HostRoute(**hr)
for hr in subnet_arg['host_routes']]
subnet = data_models.Subnet(**subnet_arg)
additional_vip_data.append(
data_models.AdditionalVipData(
ip_address=add_vip['ip_address'],
subnet=subnet))
self.amphora_driver.post_vip_plug(
db_amp, db_lb, amphorae_network_config, vrrp_port=vrrp_port,
vip_subnet=vip_subnet)
vip_subnet=vip_subnet, additional_vip_data=additional_vip_data)
LOG.debug("Notified amphora of vip plug")
def revert(self, result, amphora, loadbalancer, *args, **kwargs):

View File

@ -449,6 +449,32 @@ class UpdateVIPAfterAllocation(BaseDatabaseTask):
db_lb).to_dict()
class UpdateAdditionalVIPsAfterAllocation(BaseDatabaseTask):
"""Update a VIP associated with a given load balancer."""
def execute(self, loadbalancer_id, additional_vips):
"""Update additional VIPs associated with a given load balancer.
:param loadbalancer_id: Id of the load balancer whose additional VIPs
should be updated.
:param additional_vips: list of additional VIP dicts with update data.
:returns: The load balancer object.
"""
for vip in additional_vips:
LOG.debug('Updating additional VIP: subnet=%(subnet)s '
'ip_address=%(ip)s', {'subnet': vip[constants.SUBNET_ID],
'ip': vip[constants.IP_ADDRESS]})
self.repos.additional_vip.update(
db_apis.get_session(), loadbalancer_id,
vip[constants.SUBNET_ID],
ip_address=vip[constants.IP_ADDRESS],
port_id=vip[constants.PORT_ID])
db_lb = self.repos.load_balancer.get(db_apis.get_session(),
id=loadbalancer_id)
return provider_utils.db_loadbalancer_to_provider_loadbalancer(
db_lb).to_dict()
class UpdateAmphoraeVIPData(BaseDatabaseTask):
"""Update amphorae VIP data."""

View File

@ -607,8 +607,14 @@ class AllocateVIP(BaseNetworkTask):
loadbalancer[constants.VIP_ADDRESS])
db_lb = self.loadbalancer_repo.get(
db_apis.get_session(), id=loadbalancer[constants.LOADBALANCER_ID])
vip = self.network_driver.allocate_vip(db_lb)
return vip.to_dict()
vip, additional_vips = self.network_driver.allocate_vip(db_lb)
for add_vip in additional_vips:
LOG.debug('Allocated an additional VIP: subnet=%(subnet)s '
'ip_address=%(ip)s', {'subnet': add_vip.subnet_id,
'ip': add_vip.ip_address})
return (vip.to_dict(),
[additional_vip.to_dict()
for additional_vip in additional_vips])
def revert(self, result, loadbalancer, *args, **kwargs):
"""Handle a failure to allocate vip."""
@ -616,7 +622,8 @@ class AllocateVIP(BaseNetworkTask):
if isinstance(result, failure.Failure):
LOG.exception("Unable to allocate VIP")
return
vip = data_models.Vip(**result)
vip, additional_vips = result
vip = data_models.Vip(**vip)
LOG.warning("Deallocating vip %s", vip.ip_address)
try:
self.network_driver.deallocate_vip(vip)

View File

@ -51,6 +51,9 @@ class OctaviaBase(models.ModelBase):
return obj.__class__.__name__ + obj.project_id
if obj.__class__.__name__ in ['AvailabilityZone']:
return obj.__class__.__name__ + obj.name
if obj.__class__.__name__ in ['AdditionalVip']:
return (obj.__class__.__name__ +
obj.load_balancer_id + obj.subnet_id)
raise NotImplementedError
def to_data_model(self, _graph_nodes=None):

View File

@ -0,0 +1,44 @@
# Copyright 2019 Verizon Media
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""allow multiple vips per loadbalancer
Revision ID: 31f7653ded67
Revises: b8bd389cbae7
Create Date: 2019-05-04 19:44:22.825499
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '31f7653ded67'
down_revision = 'b8bd389cbae7'
def upgrade():
op.create_table(
u'additional_vip',
sa.Column(u'load_balancer_id', sa.String(36), nullable=False,
index=True),
sa.Column(u'ip_address', sa.String(64), nullable=True),
sa.Column(u'port_id', sa.String(36), nullable=True),
sa.Column(u'subnet_id', sa.String(36), nullable=True),
sa.Column(u'network_id', sa.String(36), nullable=True),
sa.ForeignKeyConstraint([u'load_balancer_id'], [u'load_balancer.id'],
name=u'fk_add_vip_load_balancer_id'),
sa.PrimaryKeyConstraint(u'load_balancer_id', u'subnet_id',
name=u'pk_add_vip_load_balancer_subnet'),
)
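
The hunk above is upgrade-only; if a reverse migration were ever wanted, the hypothetical inverse (not part of this change) would simply drop the table:

def downgrade():
    # Hypothetical inverse, not part of this change.
    op.drop_table('additional_vip')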

View File

@ -405,6 +405,9 @@ class LoadBalancer(base_models.BASE, base_models.IdMixin,
provider = sa.Column(sa.String(64), nullable=True)
vip = orm.relationship('Vip', cascade='delete', uselist=False,
backref=orm.backref('load_balancer', uselist=False))
additional_vips = orm.relationship(
'AdditionalVip', cascade='delete', uselist=True,
backref=orm.backref('load_balancer', uselist=False))
pools = orm.relationship('Pool', cascade='delete', uselist=True,
back_populates="load_balancer")
listeners = orm.relationship('Listener', cascade='delete', uselist=True,
@ -469,6 +472,28 @@ class Vip(base_models.BASE):
octavia_owned = sa.Column(sa.Boolean(), nullable=True)
class AdditionalVip(base_models.BASE):
__data_model__ = data_models.AdditionalVip
__tablename__ = "additional_vip"
__table_args__ = (
sa.PrimaryKeyConstraint('load_balancer_id', 'subnet_id',
name='pk_add_vip_load_balancer_subnet'),
)
load_balancer_id = sa.Column(
sa.String(36),
sa.ForeignKey("load_balancer.id",
name="fk_add_vip_load_balancer_id"),
nullable=False, index=True)
ip_address = sa.Column(sa.String(64), nullable=True)
port_id = sa.Column(sa.String(36), nullable=True)
subnet_id = sa.Column(sa.String(36), nullable=True)
network_id = sa.Column(sa.String(36), nullable=True)
class Listener(base_models.BASE, base_models.IdMixin,
base_models.ProjectMixin, models.TimestampMixin,
base_models.NameMixin, base_models.TagMixin):

View File

@ -210,6 +210,7 @@ class Repositories(object):
def __init__(self):
self.load_balancer = LoadBalancerRepository()
self.vip = VipRepository()
self.additional_vip = AdditionalVipRepository()
self.health_monitor = HealthMonitorRepository()
self.session_persistence = SessionPersistenceRepository()
self.pool = PoolRepository()
@ -231,7 +232,8 @@ class Repositories(object):
self.availability_zone = AvailabilityZoneRepository()
self.availability_zone_profile = AvailabilityZoneProfileRepository()
def create_load_balancer_and_vip(self, session, lb_dict, vip_dict):
def create_load_balancer_and_vip(self, session, lb_dict, vip_dict,
additional_vip_dicts=None):
"""Inserts load balancer and vip entities into the database.
Inserts load balancer and vip entities into the database in one
@ -240,8 +242,10 @@ class Repositories(object):
:param session: A Sql Alchemy database session.
:param lb_dict: Dictionary representation of a load balancer
:param vip_dict: Dictionary representation of a vip
:param additional_vip_dicts: Dict representations of additional vips
:returns: octavia.common.data_models.LoadBalancer
"""
additional_vip_dicts = additional_vip_dicts or []
with session.begin(subtransactions=True):
if not lb_dict.get('id'):
lb_dict['id'] = uuidutils.generate_uuid()
@ -250,6 +254,13 @@ class Repositories(object):
vip_dict['load_balancer_id'] = lb_dict['id']
vip = models.Vip(**vip_dict)
session.add(vip)
for add_vip_dict in additional_vip_dicts:
add_vip_dict['load_balancer_id'] = lb_dict['id']
add_vip_dict['network_id'] = vip_dict.get('network_id')
add_vip_dict['port_id'] = vip_dict.get('port_id')
add_vip = models.AdditionalVip(**add_vip_dict)
session.add(add_vip)
return self.load_balancer.get(session, id=lb.id)
def create_pool_on_load_balancer(self, session, pool_dict,
@ -655,6 +666,7 @@ class Repositories(object):
def create_load_balancer_tree(self, session, lock_session, lb_dict):
listener_dicts = lb_dict.pop('listeners', [])
vip_dict = lb_dict.pop('vip')
additional_vip_dicts = lb_dict.pop('additional_vips', [])
try:
if self.check_quota_met(session,
lock_session,
@ -663,7 +675,7 @@ class Repositories(object):
raise exceptions.QuotaException(
resource=data_models.LoadBalancer._name())
lb_dm = self.create_load_balancer_and_vip(
lock_session, lb_dict, vip_dict)
lock_session, lb_dict, vip_dict, additional_vip_dicts)
for listener_dict in listener_dicts:
# Add listener quota check
if self.check_quota_met(session,
@ -848,6 +860,7 @@ class LoadBalancerRepository(BaseRepository):
# no-load (blank) the tables we don't need
query_options = (
subqueryload(models.LoadBalancer.vip),
subqueryload(models.LoadBalancer.additional_vips),
subqueryload(models.LoadBalancer.amphorae),
subqueryload(models.LoadBalancer.pools),
subqueryload(models.LoadBalancer.listeners),
@ -928,6 +941,21 @@ class VipRepository(BaseRepository):
load_balancer_id=load_balancer_id).update(model_kwargs)
class AdditionalVipRepository(BaseRepository):
model_class = models.AdditionalVip
def update(self, session, load_balancer_id, subnet_id,
**model_kwargs):
"""Updates an additional vip entity in the database.
Rows are keyed on (load_balancer_id, subnet_id).
"""
with session.begin(subtransactions=True):
session.query(self.model_class).filter_by(
load_balancer_id=load_balancer_id,
subnet_id=subnet_id).update(model_kwargs)
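
A self-contained sketch of the composite-key pattern (simplified stand-in model, not Octavia's Base; values assumed), showing why the update needs both identifiers to target exactly one row:

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class AdditionalVip(Base):
    # Stand-in for the model above: one row per (load balancer, subnet).
    __tablename__ = 'additional_vip'
    load_balancer_id = sa.Column(sa.String(36), primary_key=True)
    subnet_id = sa.Column(sa.String(36), primary_key=True)
    ip_address = sa.Column(sa.String(64), nullable=True)

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(AdditionalVip(load_balancer_id='lb-1', subnet_id='net-v6'))
session.commit()
# The repository update targets exactly one row via the composite key.
session.query(AdditionalVip).filter_by(
    load_balancer_id='lb-1', subnet_id='net-v6').update(
    {'ip_address': '2001:db8::b33f'})
session.commit()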
class HealthMonitorRepository(BaseRepository):
model_class = models.HealthMonitor

View File

@ -104,7 +104,8 @@ class AbstractNetworkDriver(object, metaclass=abc.ABCMeta):
balancer.
:param load_balancer: octavia.common.data_models.LoadBalancer instance
:return: octavia.common.data_models.VIP
:return: octavia.common.data_models.Vip,
list(octavia.common.data_models.AdditionalVip)
:raises: AllocateVIPException, PortNotFound, SubnetNotFound
"""

View File

@ -97,6 +97,10 @@ class Port(data_models.BaseDataModel):
self.security_group_ids = security_group_ids or []
def get_subnet_id(self, fixed_ip_address):
# TODO(rm_work): We are assuming that we can't have the same IP on
# multiple subnets on the same port, because it wouldn't work properly.
# However, I don't know that we prevent it in the API -- so this might
# exhibit undefined behavior if a user tries to do that.
for fixed_ip in self.fixed_ips:
if fixed_ip.ip_address == fixed_ip_address:
return fixed_ip.subnet_id
@ -135,7 +139,7 @@ class AmphoraNetworkConfig(data_models.BaseDataModel):
def __init__(self, amphora=None, vip_subnet=None, vip_port=None,
vrrp_subnet=None, vrrp_port=None, ha_subnet=None,
ha_port=None):
ha_port=None, additional_vip_data=None):
self.amphora = amphora
self.vip_subnet = vip_subnet
self.vip_port = vip_port
@ -143,6 +147,14 @@ class AmphoraNetworkConfig(data_models.BaseDataModel):
self.vrrp_port = vrrp_port
self.ha_subnet = ha_subnet
self.ha_port = ha_port
self.additional_vip_data = additional_vip_data or []
class AdditionalVipData(data_models.BaseDataModel):
def __init__(self, ip_address=None, subnet=None):
self.ip_address = ip_address
self.subnet = subnet
class HostRoute(data_models.BaseDataModel):

View File

@ -123,14 +123,14 @@ class AllowedAddressPairsDriver(neutron_base.BaseNeutronDriver):
raise base.PlugVIPException(message) from e
return interface
def _add_vip_address_pair(self, port_id, vip_address):
def _add_vip_address_pairs(self, port_id, vip_address_list):
try:
self._add_allowed_address_pair_to_port(port_id, vip_address)
self._add_allowed_address_pairs_to_port(port_id, vip_address_list)
except neutron_client_exceptions.PortNotFoundClient as e:
raise base.PortNotFound(str(e))
except Exception as e:
message = _('Error adding allowed address pair {ip} '
'to port {port_id}.').format(ip=vip_address,
message = _('Error adding allowed address pair(s) {ips} '
'to port {port_id}.').format(ips=vip_address_list,
port_id=port_id)
LOG.exception(message)
raise base.PlugVIPException(message) from e
@ -216,13 +216,19 @@ class AllowedAddressPairsDriver(neutron_base.BaseNeutronDriver):
LOG.info("Security group rule %s not found, will assume "
"it is already deleted.", rule_id)
ethertype = self._get_ethertype_for_ip(load_balancer.vip.ip_address)
ethertypes = set()
primary_ethertype = self._get_ethertype_for_ip(
load_balancer.vip.ip_address)
ethertypes.add(primary_ethertype)
for add_vip in load_balancer.additional_vips:
ethertypes.add(self._get_ethertype_for_ip(add_vip.ip_address))
for port_protocol in add_ports:
self._create_security_group_rule(sec_grp_id, port_protocol[1],
port_min=port_protocol[0],
port_max=port_protocol[0],
ethertype=ethertype,
cidr=port_protocol[2])
for ethertype in ethertypes:
self._create_security_group_rule(sec_grp_id, port_protocol[1],
port_min=port_protocol[0],
port_max=port_protocol[0],
ethertype=ethertype,
cidr=port_protocol[2])
# Currently we are using the VIP network for VRRP
# so we need to open up the protocols for it
@ -232,7 +238,7 @@ class AllowedAddressPairsDriver(neutron_base.BaseNeutronDriver):
sec_grp_id,
constants.VRRP_PROTOCOL_NUM,
direction='ingress',
ethertype=ethertype)
ethertype=primary_ethertype)
except neutron_client_exceptions.Conflict:
# It's ok if this rule already exists
pass
@ -242,7 +248,7 @@ class AllowedAddressPairsDriver(neutron_base.BaseNeutronDriver):
try:
self._create_security_group_rule(
sec_grp_id, constants.AUTH_HEADER_PROTOCOL_NUMBER,
direction='ingress', ethertype=ethertype)
direction='ingress', ethertype=primary_ethertype)
except neutron_client_exceptions.Conflict:
# It's ok if this rule already exists
pass
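
A set is used so a dual-stack LB produces exactly one IPv4 and one IPv6 rule per (port, protocol), while several VIPs of the same family still collapse to a single rule; the VRRP and agent auth-header rules stay keyed to the primary VIP's family. Standalone sketch (helper semantics assumed):

import ipaddress

def ethertype_for(ip):
    # Stand-in for _get_ethertype_for_ip (assumed semantics).
    return 'IPv6' if ipaddress.ip_address(ip).version == 6 else 'IPv4'

vip_ips = ['203.0.113.50', '2001:db8::b33f']
ethertypes = {ethertype_for(ip) for ip in vip_ips}
add_ports = [(80, 'tcp', None)]
for port, protocol, cidr in add_ports:
    for ethertype in ethertypes:
        print('rule:', port, protocol, ethertype, cidr)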
@ -401,7 +407,11 @@ class AllowedAddressPairsDriver(neutron_base.BaseNeutronDriver):
if not interface:
interface = self._plug_amphora_vip(amphora, subnet)
self._add_vip_address_pair(interface.port_id, vip.ip_address)
aap_address_list = [vip.ip_address]
for add_vip in load_balancer.additional_vips:
aap_address_list.append(add_vip.ip_address)
self._add_vip_address_pairs(interface.port_id, aap_address_list)
if self.sec_grp_enabled:
self._add_vip_security_group_to_port(load_balancer.id,
interface.port_id)
@ -505,6 +515,19 @@ class AllowedAddressPairsDriver(neutron_base.BaseNeutronDriver):
if load_balancer.vip.ip_address:
fixed_ip[constants.IP_ADDRESS] = load_balancer.vip.ip_address
fixed_ips = []
if fixed_ip:
fixed_ips.append(fixed_ip)
for add_vip in load_balancer.additional_vips:
add_ip = {}
if add_vip.subnet_id:
add_ip['subnet_id'] = add_vip.subnet_id
if add_vip.ip_address:
add_ip['ip_address'] = add_vip.ip_address
if add_ip: # TODO(rm_work): Again, could this be empty?
fixed_ips.append(add_ip)
# Make sure we are backward compatible with older neutron
if self._check_extension_enabled(PROJECT_ID_ALIAS):
project_id_key = 'project_id'
@ -520,8 +543,8 @@ class AllowedAddressPairsDriver(neutron_base.BaseNeutronDriver):
constants.DEVICE_OWNER: OCTAVIA_OWNER,
project_id_key: load_balancer.project_id}}
if fixed_ip:
port[constants.PORT][constants.FIXED_IPS] = [fixed_ip]
if fixed_ips:
port[constants.PORT][constants.FIXED_IPS] = fixed_ips
try:
new_port = self.neutron_client.create_port(port)
except Exception as e:
@ -693,9 +716,12 @@ class AllowedAddressPairsDriver(neutron_base.BaseNeutronDriver):
return plugged_interface
def _get_amp_net_configs(self, amp, amp_configs, vip_subnet, vip_port):
def _get_amp_net_configs(self, amp, amp_configs, vip_subnet, vip_port,
additional_vips):
if amp.status != constants.DELETED:
LOG.debug("Retrieving network details for amphora %s", amp.id)
LOG.debug('Called with ADDITIONAL_VIPS: %(vips)s',
{'vips': additional_vips})
vrrp_port = self.get_port(amp.vrrp_port_id)
vrrp_subnet = self.get_subnet(
vrrp_port.get_subnet_id(amp.vrrp_ip))
@ -704,6 +730,17 @@ class AllowedAddressPairsDriver(neutron_base.BaseNeutronDriver):
ha_subnet = self.get_subnet(
ha_port.get_subnet_id(amp.ha_ip))
additional_vip_data = []
for add_vip in additional_vips:
add_vip_subnet = self.get_subnet(add_vip.subnet_id)
add_vip_data = n_data_models.AdditionalVipData(
ip_address=add_vip.ip_address,
subnet=add_vip_subnet
)
additional_vip_data.append(add_vip_data)
LOG.debug('Processed ADDITIONAL VIPS and got: %(vips)s',
{'vips': additional_vip_data})
amp_configs[amp.id] = n_data_models.AmphoraNetworkConfig(
amphora=amp,
vip_subnet=vip_subnet,
@ -711,21 +748,26 @@ class AllowedAddressPairsDriver(neutron_base.BaseNeutronDriver):
vrrp_subnet=vrrp_subnet,
vrrp_port=vrrp_port,
ha_subnet=ha_subnet,
ha_port=ha_port
ha_port=ha_port,
additional_vip_data=additional_vip_data
)
def get_network_configs(self, loadbalancer, amphora=None):
vip_subnet = self.get_subnet(loadbalancer.vip.subnet_id)
vip_port = self.get_port(loadbalancer.vip.port_id)
amp_configs = {}
LOG.debug('Loadbalancer has ADDITIONAL VIPS: %(vips)s',
{'vips': loadbalancer.additional_vips})
if amphora:
self._get_amp_net_configs(amphora, amp_configs,
vip_subnet, vip_port)
vip_subnet, vip_port,
loadbalancer.additional_vips)
else:
for amp in loadbalancer.amphorae:
try:
self._get_amp_net_configs(amp, amp_configs,
vip_subnet, vip_port)
vip_subnet, vip_port,
loadbalancer.additional_vips)
except Exception as e:
LOG.warning('Getting network configurations for amphora '
'%(amp)s failed due to %(err)s.',

View File

@ -73,24 +73,41 @@ class BaseNeutronDriver(base.AbstractNetworkDriver):
def _port_to_vip(self, port, load_balancer, octavia_owned=False):
fixed_ip = None
additional_ips = []
for port_fixed_ip in port.fixed_ips:
if port_fixed_ip.subnet_id == load_balancer.vip.subnet_id:
LOG.debug('Found fixed_ip: subnet=%(subnet)s '
'ip_address=%(ip)s', {'subnet': port_fixed_ip.subnet_id,
'ip': port_fixed_ip.ip_address})
if (not fixed_ip and
port_fixed_ip.subnet_id == load_balancer.vip.subnet_id):
fixed_ip = port_fixed_ip
break
else:
additional_ips.append(port_fixed_ip)
if fixed_ip:
return data_models.Vip(ip_address=fixed_ip.ip_address,
subnet_id=fixed_ip.subnet_id,
network_id=port.network_id,
port_id=port.id,
load_balancer=load_balancer,
load_balancer_id=load_balancer.id,
octavia_owned=octavia_owned)
return data_models.Vip(ip_address=None, subnet_id=None,
network_id=port.network_id,
port_id=port.id,
load_balancer=load_balancer,
load_balancer_id=load_balancer.id,
octavia_owned=octavia_owned)
primary_vip = data_models.Vip(ip_address=fixed_ip.ip_address,
subnet_id=fixed_ip.subnet_id,
network_id=port.network_id,
port_id=port.id,
load_balancer=load_balancer,
load_balancer_id=load_balancer.id,
octavia_owned=octavia_owned)
else:
primary_vip = data_models.Vip(ip_address=None, subnet_id=None,
network_id=port.network_id,
port_id=port.id,
load_balancer=load_balancer,
load_balancer_id=load_balancer.id,
octavia_owned=octavia_owned)
additional_vips = [
data_models.AdditionalVip(
ip_address=add_fixed_ip.ip_address,
subnet_id=add_fixed_ip.subnet_id,
network_id=port.network_id,
port_id=port.id,
load_balancer=load_balancer,
load_balancer_id=load_balancer.id)
for add_fixed_ip in additional_ips]
return primary_vip, additional_vips
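
_port_to_vip now yields a (primary, additional) pair: the first fixed IP found on the configured VIP subnet becomes the Vip, and every other fixed IP on the port becomes an AdditionalVip. A plain-dict sketch of the split (values assumed):

fixed_ips = [
    {'subnet_id': 'subnet-v4', 'ip_address': '203.0.113.50'},
    {'subnet_id': 'subnet-v6', 'ip_address': '2001:db8::b33f'},
]
vip_subnet_id = 'subnet-v4'

primary, additional = None, []
for fip in fixed_ips:
    if primary is None and fip['subnet_id'] == vip_subnet_id:
        primary = fip
    else:
        additional.append(fip)
print(primary, additional)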
def _nova_interface_to_octavia_interface(self, compute_id, nova_interface):
fixed_ips = [utils.convert_fixed_ip_dict_to_model(fixed_ip)
@ -108,11 +125,11 @@ class BaseNeutronDriver(base.AbstractNetworkDriver):
port_id=port['id'],
fixed_ips=fixed_ips)
def _add_allowed_address_pair_to_port(self, port_id, ip_address):
def _add_allowed_address_pairs_to_port(self, port_id, ip_address_list):
aap = {
'port': {
'allowed_address_pairs': [
{'ip_address': ip_address}
{'ip_address': ip} for ip in ip_address_list
]
}
}
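
Note that a neutron port update replaces the whole allowed_address_pairs list, which is why the helper now takes every VIP address in one call instead of being invoked once per address. The body it builds expands to (values assumed):

ip_address_list = ['203.0.113.50', '2001:db8::b33f']
aap = {'port': {'allowed_address_pairs': [
    {'ip_address': ip} for ip in ip_address_list]}}
# {'port': {'allowed_address_pairs': [{'ip_address': '203.0.113.50'},
#                                     {'ip_address': '2001:db8::b33f'}]}}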

View File

@ -21,8 +21,13 @@ from octavia.network import data_models as network_models
LOG = logging.getLogger(__name__)
_PLUGGED_NETWORKS = {}
_PORTS = {}
_NOOP_MANAGER_VARS = {
'networks': {},
'subnets': {},
'ports': {},
'interfaces': {},
'current_network': None
}
class NoopManager(object):
@ -46,11 +51,21 @@ class NoopManager(object):
network_id = loadbalancer.vip.network_id or network_id
port_id = loadbalancer.vip.port_id or port_id
ip_address = loadbalancer.vip.ip_address or ip_address
return data_models.Vip(ip_address=ip_address,
subnet_id=subnet_id,
network_id=network_id,
port_id=port_id,
load_balancer_id=loadbalancer.id)
return_vip = data_models.Vip(ip_address=ip_address,
subnet_id=subnet_id,
network_id=network_id,
port_id=port_id,
load_balancer_id=loadbalancer.id)
additional_vips = [
data_models.AdditionalVip(
ip_address=add_vip.ip_address,
subnet_id=add_vip.subnet_id,
network_id=network_id,
port_id=port_id,
load_balancer=loadbalancer,
load_balancer_id=loadbalancer.id)
for add_vip in loadbalancer.additional_vips]
return return_vip, additional_vips
def deallocate_vip(self, vip):
LOG.debug("Network %s no-op, deallocate_vip vip %s",
@ -128,10 +143,12 @@ class NoopManager(object):
subnet_id=subnet_id)],
port_id=uuidutils.generate_uuid()
)
_PORTS[interface.port_id] = network_models.Port(
id=interface.port_id,
network_id=network_id)
_PLUGGED_NETWORKS[(network_id, compute_id)] = interface
_NOOP_MANAGER_VARS['ports'][interface.port_id] = (
network_models.Port(
id=interface.port_id,
network_id=network_id))
_NOOP_MANAGER_VARS['interfaces'][(network_id, compute_id)] = (
interface)
return interface
def unplug_network(self, compute_id, network_id):
@ -140,14 +157,14 @@ class NoopManager(object):
self.__class__.__name__, compute_id, network_id)
self.networkconfigconfig[(compute_id, network_id)] = (
compute_id, network_id, 'unplug_network')
_PLUGGED_NETWORKS.pop((network_id, compute_id), None)
_NOOP_MANAGER_VARS['interfaces'].pop((network_id, compute_id), None)
def get_plugged_networks(self, compute_id):
LOG.debug("Network %s no-op, get_plugged_networks amphora_id %s",
self.__class__.__name__, compute_id)
self.networkconfigconfig[compute_id] = (
compute_id, 'get_plugged_networks')
return [pn for pn in _PLUGGED_NETWORKS.values()
return [pn for pn in _NOOP_MANAGER_VARS['interfaces'].values()
if pn.compute_id == compute_id]
def update_vip(self, loadbalancer, for_delete=False):
@ -161,57 +178,114 @@ class NoopManager(object):
LOG.debug("Network %s no-op, get_network network_id %s",
self.__class__.__name__, network_id)
self.networkconfigconfig[network_id] = (network_id, 'get_network')
network = network_models.Network(id=uuidutils.generate_uuid(),
if network_id in _NOOP_MANAGER_VARS['networks']:
return _NOOP_MANAGER_VARS['networks'][network_id]
network = network_models.Network(id=network_id,
port_security_enabled=True)
class ItIsInsideMe(network_models.Subnet):
class ItIsInsideMe(list):
known_subnets = None
def __init__(self, network, parent):
super().__init__()
self.network = network
self.parent = parent
self.known_subnets = {}
def to_dict(self, **kwargs):
return [{}]
def __contains__(self, item):
self.known_subnets[item] = self.parent.get_subnet(item)
self.known_subnets[item].network_id = self.network.id
return True
def __iter__(self):
yield uuidutils.generate_uuid()
def __len__(self):
return len(self.known_subnets) + 1
network.subnets = ItIsInsideMe()
def __iter__(self):
for subnet in self.known_subnets:
yield subnet
subnet = network_models.Subnet(id=uuidutils.generate_uuid(),
network_id=self.network.id)
self.known_subnets[subnet.id] = subnet
_NOOP_MANAGER_VARS['subnets'][subnet.id] = subnet
yield subnet.id
network.subnets = ItIsInsideMe(network, self)
_NOOP_MANAGER_VARS['networks'][network_id] = network
_NOOP_MANAGER_VARS['current_network'] = network_id
return network
def get_subnet(self, subnet_id):
LOG.debug("Subnet %s no-op, get_subnet subnet_id %s",
self.__class__.__name__, subnet_id)
self.networkconfigconfig[subnet_id] = (subnet_id, 'get_subnet')
return network_models.Subnet(id=uuidutils.generate_uuid())
if subnet_id in _NOOP_MANAGER_VARS['subnets']:
return _NOOP_MANAGER_VARS['subnets'][subnet_id]
subnet = network_models.Subnet(
id=subnet_id,
network_id=_NOOP_MANAGER_VARS['current_network'])
_NOOP_MANAGER_VARS['subnets'][subnet_id] = subnet
return subnet
def get_port(self, port_id):
LOG.debug("Port %s no-op, get_port port_id %s",
self.__class__.__name__, port_id)
self.networkconfigconfig[port_id] = (port_id, 'get_port')
if port_id in _PORTS:
return _PORTS[port_id]
return network_models.Port(id=uuidutils.generate_uuid(),
network_id=uuidutils.generate_uuid())
if port_id in _NOOP_MANAGER_VARS['ports']:
return _NOOP_MANAGER_VARS['ports'][port_id]
port = network_models.Port(id=port_id)
_NOOP_MANAGER_VARS['ports'][port_id] = port
return port
def get_network_by_name(self, network_name):
LOG.debug("Network %s no-op, get_network_by_name network_name %s",
self.__class__.__name__, network_name)
self.networkconfigconfig[network_name] = (network_name,
'get_network_by_name')
return network_models.Network(id=uuidutils.generate_uuid(),
port_security_enabled=True)
by_name = {n.name: n for n in _NOOP_MANAGER_VARS['networks'].values()}
if network_name in by_name:
return by_name[network_name]
network = network_models.Network(id=uuidutils.generate_uuid(),
port_security_enabled=True,
name=network_name)
_NOOP_MANAGER_VARS['networks'][network.id] = network
_NOOP_MANAGER_VARS['current_network'] = network.id
return network
def get_subnet_by_name(self, subnet_name):
LOG.debug("Subnet %s no-op, get_subnet_by_name subnet_name %s",
self.__class__.__name__, subnet_name)
self.networkconfigconfig[subnet_name] = (subnet_name,
'get_subnet_by_name')
return network_models.Subnet(id=uuidutils.generate_uuid())
by_name = {s.name: s for s in _NOOP_MANAGER_VARS['subnets'].values()}
if subnet_name in by_name:
return by_name[subnet_name]
subnet = network_models.Subnet(
id=uuidutils.generate_uuid(),
name=subnet_name,
network_id=_NOOP_MANAGER_VARS['current_network'])
_NOOP_MANAGER_VARS['subnets'][subnet.id] = subnet
return subnet
def get_port_by_name(self, port_name):
LOG.debug("Port %s no-op, get_port_by_name port_name %s",
self.__class__.__name__, port_name)
self.networkconfigconfig[port_name] = (port_name, 'get_port_by_name')
return network_models.Port(id=uuidutils.generate_uuid())
by_name = {p.name: p for p in _NOOP_MANAGER_VARS['ports'].values()}
if port_name in by_name:
return by_name[port_name]
port = network_models.Port(id=uuidutils.generate_uuid(),
name=port_name)
_NOOP_MANAGER_VARS['ports'][port.id] = port
return port
def get_port_by_net_id_device_id(self, network_id, device_id):
LOG.debug("Port %s no-op, get_port_by_net_id_device_id network_id %s"
@ -219,7 +293,16 @@ class NoopManager(object):
self.__class__.__name__, network_id, device_id)
self.networkconfigconfig[(network_id, device_id)] = (
network_id, device_id, 'get_port_by_net_id_device_id')
return network_models.Port(id=uuidutils.generate_uuid())
by_net_dev_id = {(p.network_id, p.device_id): p
for p in _NOOP_MANAGER_VARS['ports'].values()}
if (network_id, device_id) in by_net_dev_id:
return by_net_dev_id[(network_id, device_id)]
port = network_models.Port(id=uuidutils.generate_uuid(),
network_id=network_id,
device_id=device_id)
_NOOP_MANAGER_VARS['ports'][port.id] = port
return port
def get_security_group(self, sg_name):
LOG.debug("Network %s no-op, get_security_group name %s",
@ -298,8 +381,7 @@ class NoopManager(object):
ip_avail = network_models.Network_IP_Availability(
network_id=network.id)
subnet_ip_availability = []
network.subnets = list(network.subnets)
for subnet_id in network.subnets:
for subnet_id in list(network.subnets):
subnet_ip_availability.append({'subnet_id': subnet_id,
'used_ips': 0, 'total_ips': 254})
ip_avail.subnet_ip_availability = subnet_ip_availability
@ -361,7 +443,7 @@ class NoopManager(object):
port = network_models.Port(id=port_id,
network_id=uuidutils.generate_uuid())
_PORTS[port.id] = port
_NOOP_MANAGER_VARS['ports'][port.id] = port
return port
def unplug_fixed_ip(self, port_id, subnet_id):
@ -371,7 +453,7 @@ class NoopManager(object):
self.networkconfigconfig[(port_id, subnet_id)] = (
port_id, subnet_id, 'unplug_fixed_ip')
return _PORTS.get(port_id)
return _NOOP_MANAGER_VARS['ports'].get(port_id)
class NoopNetworkDriver(driver_base.AbstractNetworkDriver):

View File

@ -215,6 +215,7 @@ MOCK_MANAGEMENT_PORT2 = {'port': {'network_id': MOCK_MANAGEMENT_NET_ID,
'fixed_ips': MOCK_MANAGEMENT_FIXED_IPS2}}
MOCK_VIP_SUBNET_ID = 'vip-subnet-1'
MOCK_VIP_SUBNET_ID2 = 'vip-subnet-2'
MOCK_VIP_NET_ID = 'vip-net-1'
MOCK_VRRP_PORT_ID1 = 'vrrp-port-1'
MOCK_VRRP_PORT_ID2 = 'vrrp-port-2'

View File

@ -17,10 +17,11 @@ from octavia.common import data_models
from octavia.tests.common import constants as ut_constants
def generate_load_balancer_tree():
def generate_load_balancer_tree(additional_vips=None):
vip = generate_vip()
amps = [generate_amphora(), generate_amphora()]
lb = generate_load_balancer(vip=vip, amphorae=amps)
lb = generate_load_balancer(vip=vip, amphorae=amps,
additional_vips=additional_vips)
return lb
@ -28,8 +29,10 @@ LB_SEED = 0
def generate_load_balancer(vip=None, amphorae=None,
topology=constants.TOPOLOGY_SINGLE):
topology=constants.TOPOLOGY_SINGLE,
additional_vips=None):
amphorae = amphorae or []
additional_vips = additional_vips or []
global LB_SEED
LB_SEED += 1
lb = data_models.LoadBalancer(id='lb{0}-id'.format(LB_SEED),
@ -46,6 +49,16 @@ def generate_load_balancer(vip=None, amphorae=None,
if vip:
vip.load_balancer = lb
vip.load_balancer_id = lb.id
for add_vip in additional_vips:
add_vip_obj = data_models.AdditionalVip(
load_balancer_id=lb.id,
ip_address=add_vip.get('ip_address'),
subnet_id=add_vip.get('subnet_id'),
network_id=vip.network_id,
port_id=vip.port_id,
load_balancer=lb
)
lb.additional_vips.append(add_vip_obj)
return lb
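
With the new keyword, test trees can be seeded with extra VIPs, e.g. (values assumed):

lb = generate_load_balancer_tree(additional_vips=[
    {'subnet_id': 'subnet-v6', 'ip_address': '2001:db8::b33f'}])
assert lb.additional_vips[0].ip_address == '2001:db8::b33f'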

View File

@ -29,6 +29,7 @@ class SampleDriverDataModels(object):
self.project_id = uuidutils.generate_uuid()
self.lb_id = uuidutils.generate_uuid()
self.ip_address = '192.0.2.30'
self.ip_address2 = '192.0.2.31'
self.port_id = uuidutils.generate_uuid()
self.network_id = uuidutils.generate_uuid()
self.subnet_id = uuidutils.generate_uuid()
@ -597,6 +598,11 @@ class SampleDriverDataModels(object):
lib_consts.VIP_QOS_POLICY_ID: self.qos_policy_id,
constants.OCTAVIA_OWNED: None}
self.provider_additional_vip_dicts = [
{'ip_address': self.ip_address2,
'subnet_id': self.subnet_id}
]
self.db_vip = data_models.Vip(
ip_address=self.ip_address,
network_id=self.network_id,
@ -635,7 +641,7 @@ class SampleDriverDataModels(object):
lib_consts.VIP_SUBNET_ID: self.subnet_id}
self.provider_loadbalancer_tree_dict = {
lib_consts.ADDITIONAL_VIPS: None,
lib_consts.ADDITIONAL_VIPS: [],
lib_consts.ADMIN_STATE_UP: True,
lib_consts.AVAILABILITY_ZONE: None,
lib_consts.DESCRIPTION: self.lb_description,

View File

@ -955,6 +955,24 @@ class TestServerTestCase(base.TestCase):
handle.write.assert_any_call(octavia_utils.b('TestT'))
handle.write.assert_any_call(octavia_utils.b('est'))
def _check_centos_files(self, handle):
handle.write.assert_any_call(
'\n# Generated by Octavia agent\n'
'#!/bin/bash\n'
'if [[ "$1" != "lo" ]]\n'
' then\n'
' /usr/local/bin/lvs-masquerade.sh add ipv4 $1\n'
' /usr/local/bin/lvs-masquerade.sh add ipv6 $1\n'
'fi')
handle.write.assert_any_call(
'\n# Generated by Octavia agent\n'
'#!/bin/bash\n'
'if [[ "$1" != "lo" ]]\n'
' then\n'
' /usr/local/bin/lvs-masquerade.sh delete ipv4 $1\n'
' /usr/local/bin/lvs-masquerade.sh delete ipv6 $1\n'
'fi')
def test_ubuntu_plug_network(self):
self._test_plug_network(consts.UBUNTU)
@ -1478,6 +1496,7 @@ class TestServerTestCase(base.TestCase):
def test_ubuntu_plug_VIP4(self):
self._test_plug_VIP4(consts.UBUNTU)
def test_centos_plug_VIP4(self):
self._test_plug_VIP4(consts.CENTOS)
@mock.patch('os.chmod')
@ -1727,19 +1746,6 @@ class TestServerTestCase(base.TestCase):
'amphora-interface', 'up',
consts.NETNS_PRIMARY_INTERFACE], stderr=-2)
# Verify sysctl was loaded
calls = [mock.call('amphora-haproxy', ['/sbin/sysctl', '--system'],
stdout=subprocess.PIPE),
mock.call('amphora-haproxy', ['modprobe', 'ip_vs'],
stdout=subprocess.PIPE),
mock.call('amphora-haproxy',
['/sbin/sysctl', '-w', 'net.ipv4.ip_forward=1'],
stdout=subprocess.PIPE),
mock.call('amphora-haproxy',
['/sbin/sysctl', '-w', 'net.ipv4.vs.conntrack=1'],
stdout=subprocess.PIPE)]
mock_nspopen.assert_has_calls(calls, any_order=True)
# One Interface down, Happy Path IPv4
mode = stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH
@ -2091,20 +2097,6 @@ class TestServerTestCase(base.TestCase):
'amphora-interface', 'up', '{netns_int}'.format(
netns_int=consts.NETNS_PRIMARY_INTERFACE)], stderr=-2)
# Verify sysctl was loaded
calls = [mock.call('amphora-haproxy', ['/sbin/sysctl', '--system'],
stdout=subprocess.PIPE),
mock.call('amphora-haproxy', ['modprobe', 'ip_vs'],
stdout=subprocess.PIPE),
mock.call('amphora-haproxy',
['/sbin/sysctl', '-w',
'net.ipv6.conf.all.forwarding=1'],
stdout=subprocess.PIPE),
mock.call('amphora-haproxy',
['/sbin/sysctl', '-w', 'net.ipv4.vs.conntrack=1'],
stdout=subprocess.PIPE)]
mock_nspopen.assert_has_calls(calls, any_order=True)
# One Interface down, Happy Path IPv6
flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
mode = stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH
@ -2220,6 +2212,431 @@ class TestServerTestCase(base.TestCase):
'message': 'Error plugging VIP'},
jsonutils.loads(rv.data.decode('utf-8')))
def test_ubuntu_plug_VIP_with_additional_VIP6(self):
self._test_plug_VIP_with_additional_VIP6(consts.UBUNTU)
def test_centos_plug_VIP_with_additional_VIP6(self):
self._test_plug_VIP_with_additional_VIP6(consts.CENTOS)
@mock.patch('os.chmod')
@mock.patch('shutil.copy2')
@mock.patch('pyroute2.NSPopen', create=True)
@mock.patch('octavia.amphorae.backends.agent.api_server.'
'plug.Plug._netns_interface_exists')
@mock.patch('pyroute2.IPRoute', create=True)
@mock.patch('pyroute2.netns.create', create=True)
@mock.patch('pyroute2.NetNS', create=True)
@mock.patch('subprocess.check_output')
@mock.patch('shutil.copytree')
@mock.patch('os.makedirs')
@mock.patch('os.path.isfile')
def _test_plug_VIP_with_additional_VIP6(self, distro, mock_isfile,
mock_makedirs, mock_copytree,
mock_check_output, mock_netns,
mock_netns_create, mock_pyroute2,
mock_int_exists, mock_nspopen,
mock_copy2, mock_os_chmod):
mock_ipr = mock.MagicMock()
mock_ipr_instance = mock.MagicMock()
mock_ipr_instance.link_lookup.return_value = [33]
mock_ipr_instance.get_links.return_value = ({
'attrs': [('IFLA_IFNAME', FAKE_INTERFACE)]},)
mock_ipr.__enter__.return_value = mock_ipr_instance
mock_pyroute2.return_value = mock_ipr
mock_isfile.return_value = True
mock_int_exists.return_value = False
self.assertIn(distro, [consts.UBUNTU, consts.CENTOS])
# Happy Path IPv4 with IPv6 additional VIP, with VRRP_IP and host
# route
full_subnet_info = {
'subnet_cidr': '203.0.113.0/24',
'gateway': '203.0.113.1',
'mac_address': '123',
'vrrp_ip': '203.0.113.4',
'mtu': 1450,
'host_routes': [{'destination': '203.0.114.0/24',
'nexthop': '203.0.113.5'},
{'destination': '203.0.115.1/32',
'nexthop': '203.0.113.5'}],
'additional_vips': [
{'subnet_cidr': '2001:db8::/32',
'gateway': '2001:db8::1',
'ip_address': '2001:db8::4',
'host_routes': [{'destination': '2001:db9::/32',
'nexthop': '2001:db8::5'},
{'destination': '2001:db9::1/128',
'nexthop': '2001:db8::5'}]
},
],
}
flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
mode = stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH
file_name = ('/etc/octavia/interfaces/{netns_int}.json'.format(
netns_int=consts.NETNS_PRIMARY_INTERFACE))
m = self.useFixture(test_utils.OpenFixture(file_name)).mock_open
with mock.patch('os.open') as mock_open, mock.patch.object(
os, 'fdopen', m) as mock_fdopen, mock.patch(
'octavia.amphorae.backends.utils.interface_file.'
'InterfaceFile.dump') as mock_dump:
mock_open.return_value = 123
if distro == consts.UBUNTU:
rv = self.ubuntu_app.post('/' + api_server.VERSION +
"/plug/vip/203.0.113.2",
content_type='application/json',
data=jsonutils.dumps(
full_subnet_info))
elif distro == consts.CENTOS:
rv = self.centos_app.post('/' + api_server.VERSION +
"/plug/vip/203.0.113.2",
content_type='application/json',
data=jsonutils.dumps(
full_subnet_info))
self.assertEqual(202, rv.status_code)
mock_open.assert_any_call(file_name, flags, mode)
mock_fdopen.assert_any_call(123, 'w')
plug_inf_file = '/var/lib/octavia/plugged_interfaces'
flags = os.O_RDWR | os.O_CREAT
mode = stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH
mock_open.assert_any_call(plug_inf_file, flags, mode)
mock_fdopen.assert_any_call(123, 'r+')
expected_dict = {
consts.NAME: consts.NETNS_PRIMARY_INTERFACE,
consts.MTU: 1450,
consts.ADDRESSES: [
{
consts.ADDRESS: '203.0.113.4',
consts.PREFIXLEN: 24
}, {
consts.ADDRESS: '203.0.113.2',
consts.PREFIXLEN: 24
}, {
consts.ADDRESS: '2001:db8::4',
consts.PREFIXLEN: 32
}
],
consts.ROUTES: [
{
consts.DST: '0.0.0.0/0',
consts.GATEWAY: '203.0.113.1',
consts.FLAGS: [consts.ONLINK]
}, {
consts.DST: '0.0.0.0/0',
consts.GATEWAY: '203.0.113.1',
consts.TABLE: 1,
consts.FLAGS: [consts.ONLINK]
}, {
consts.DST: '203.0.113.0/24',
consts.PREFSRC: '203.0.113.2',
consts.SCOPE: 'link',
consts.TABLE: 1
}, {
consts.DST: '203.0.114.0/24',
consts.GATEWAY: '203.0.113.5'
}, {
consts.DST: '203.0.115.1/32',
consts.GATEWAY: '203.0.113.5'
}, {
consts.DST: '203.0.114.0/24',
consts.GATEWAY: '203.0.113.5',
consts.TABLE: 1
}, {
consts.DST: '203.0.115.1/32',
consts.GATEWAY: '203.0.113.5',
consts.TABLE: 1
}, {
consts.DST: '::/0',
consts.GATEWAY: '2001:db8::1',
consts.FLAGS: [consts.ONLINK]
}, {
consts.DST: '::/0',
consts.GATEWAY: '2001:db8::1',
consts.FLAGS: [consts.ONLINK],
consts.TABLE: 1
}, {
consts.DST: '2001:db8::/32',
consts.PREFSRC: '2001:db8::4',
consts.SCOPE: 'link',
consts.TABLE: 1
}, {
consts.DST: '2001:db9::/32',
consts.GATEWAY: '2001:db8::5'
}, {
consts.DST: '2001:db9::1/128',
consts.GATEWAY: '2001:db8::5'
}, {
consts.DST: '2001:db9::/32',
consts.GATEWAY: '2001:db8::5',
consts.TABLE: 1
}, {
consts.DST: '2001:db9::1/128',
consts.GATEWAY: '2001:db8::5',
consts.TABLE: 1
}
],
consts.RULES: [
{
consts.SRC: '203.0.113.2',
consts.SRC_LEN: 32,
consts.TABLE: 1
},
{
consts.SRC: '2001:db8::4',
consts.SRC_LEN: 128,
consts.TABLE: 1
}
],
consts.SCRIPTS: {
consts.IFACE_UP: [{
consts.COMMAND: (
"/usr/local/bin/lvs-masquerade.sh add ipv4 "
"{}".format(consts.NETNS_PRIMARY_INTERFACE))
}, {
consts.COMMAND: (
"/usr/local/bin/lvs-masquerade.sh add ipv6 "
"{}".format(consts.NETNS_PRIMARY_INTERFACE))
}],
consts.IFACE_DOWN: [{
consts.COMMAND: (
"/usr/local/bin/lvs-masquerade.sh delete ipv4 "
"{}".format(consts.NETNS_PRIMARY_INTERFACE))
}, {
consts.COMMAND: (
"/usr/local/bin/lvs-masquerade.sh delete ipv6 "
"{}".format(consts.NETNS_PRIMARY_INTERFACE))
}]
}
}
mock_dump.assert_called_once()
args = mock_dump.mock_calls[0][1]
test_utils.assert_interface_files_equal(
self, args[0], expected_dict)
mock_check_output.assert_called_with(
['ip', 'netns', 'exec', consts.AMPHORA_NAMESPACE,
'amphora-interface', 'up',
consts.NETNS_PRIMARY_INTERFACE], stderr=-2)
def test_ubuntu_plug_VIP6_with_additional_VIP(self):
self._test_plug_VIP6_with_additional_VIP(consts.UBUNTU)
def test_centos_plug_VIP6_with_additional_VIP(self):
self._test_plug_VIP6_with_additional_VIP(consts.CENTOS)
@mock.patch('os.chmod')
@mock.patch('shutil.copy2')
@mock.patch('pyroute2.NSPopen', create=True)
@mock.patch('octavia.amphorae.backends.agent.api_server.'
'plug.Plug._netns_interface_exists')
@mock.patch('pyroute2.IPRoute', create=True)
@mock.patch('pyroute2.netns.create', create=True)
@mock.patch('pyroute2.NetNS', create=True)
@mock.patch('subprocess.check_output')
@mock.patch('shutil.copytree')
@mock.patch('os.makedirs')
@mock.patch('os.path.isfile')
def _test_plug_VIP6_with_additional_VIP(self, distro, mock_isfile,
mock_makedirs, mock_copytree,
mock_check_output, mock_netns,
mock_netns_create, mock_pyroute2,
mock_int_exists, mock_nspopen,
mock_copy2, mock_os_chmod):
mock_ipr = mock.MagicMock()
mock_ipr_instance = mock.MagicMock()
mock_ipr_instance.link_lookup.return_value = [33]
mock_ipr_instance.get_links.return_value = ({
'attrs': [('IFLA_IFNAME', FAKE_INTERFACE)]},)
mock_ipr.__enter__.return_value = mock_ipr_instance
mock_pyroute2.return_value = mock_ipr
mock_isfile.return_value = True
mock_int_exists.return_value = False
self.assertIn(distro, [consts.UBUNTU, consts.CENTOS])
# Happy Path IPv6 with IPv4 additional VIP, with VRRP_IP and host
# route
full_subnet_info = {
'subnet_cidr': '2001:db8::/32',
'gateway': '2001:db8::1',
'vrrp_ip': '2001:db8::4',
'host_routes': [{'destination': '2001:db9::/32',
'nexthop': '2001:db8::5'},
{'destination': '2001:db9::1/128',
'nexthop': '2001:db8::5'}],
'mac_address': '123',
'mtu': 1450,
'additional_vips': [
{'subnet_cidr': '203.0.113.0/24',
'gateway': '203.0.113.1',
'ip_address': '203.0.113.4',
'host_routes': [{'destination': '203.0.114.0/24',
'nexthop': '203.0.113.5'},
{'destination': '203.0.115.1/32',
'nexthop': '203.0.113.5'}],
},
],
}
flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
mode = stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH
file_name = ('/etc/octavia/interfaces/{netns_int}.json'.format(
netns_int=consts.NETNS_PRIMARY_INTERFACE))
m = self.useFixture(test_utils.OpenFixture(file_name)).mock_open
with mock.patch('os.open') as mock_open, mock.patch.object(
os, 'fdopen', m) as mock_fdopen, mock.patch(
'octavia.amphorae.backends.utils.interface_file.'
'InterfaceFile.dump') as mock_dump:
mock_open.return_value = 123
if distro == consts.UBUNTU:
rv = self.ubuntu_app.post('/' + api_server.VERSION +
"/plug/vip/2001:db8::2",
content_type='application/json',
data=jsonutils.dumps(
full_subnet_info))
elif distro == consts.CENTOS:
rv = self.centos_app.post('/' + api_server.VERSION +
"/plug/vip/2001:db8::2",
content_type='application/json',
data=jsonutils.dumps(
full_subnet_info))
self.assertEqual(202, rv.status_code)
mock_open.assert_any_call(file_name, flags, mode)
mock_fdopen.assert_any_call(123, 'w')
plug_inf_file = '/var/lib/octavia/plugged_interfaces'
flags = os.O_RDWR | os.O_CREAT
mode = stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH
mock_open.assert_any_call(plug_inf_file, flags, mode)
mock_fdopen.assert_any_call(123, 'r+')
expected_dict = {
consts.NAME: consts.NETNS_PRIMARY_INTERFACE,
consts.MTU: 1450,
consts.ADDRESSES: [
{
consts.ADDRESS: '2001:db8::4',
consts.PREFIXLEN: 32
}, {
consts.ADDRESS: '2001:db8::2',
consts.PREFIXLEN: 32
}, {
consts.ADDRESS: '203.0.113.4',
consts.PREFIXLEN: 24
}
],
consts.ROUTES: [
{
consts.DST: '::/0',
consts.GATEWAY: '2001:db8::1',
consts.FLAGS: [consts.ONLINK]
}, {
consts.DST: '::/0',
consts.GATEWAY: '2001:db8::1',
consts.FLAGS: [consts.ONLINK],
consts.TABLE: 1
}, {
consts.DST: '2001:db8::/32',
consts.PREFSRC: '2001:db8::2',
consts.SCOPE: 'link',
consts.TABLE: 1
}, {
consts.DST: '2001:db9::/32',
consts.GATEWAY: '2001:db8::5'
}, {
consts.DST: '2001:db9::1/128',
consts.GATEWAY: '2001:db8::5'
}, {
consts.DST: '2001:db9::/32',
consts.GATEWAY: '2001:db8::5',
consts.TABLE: 1
}, {
consts.DST: '2001:db9::1/128',
consts.GATEWAY: '2001:db8::5',
consts.TABLE: 1
}, {
consts.DST: '0.0.0.0/0',
consts.GATEWAY: '203.0.113.1',
consts.FLAGS: [consts.ONLINK]
}, {
consts.DST: '0.0.0.0/0',
consts.GATEWAY: '203.0.113.1',
consts.TABLE: 1,
consts.FLAGS: [consts.ONLINK]
}, {
consts.DST: '203.0.113.0/24',
consts.PREFSRC: '203.0.113.4',
consts.SCOPE: 'link',
consts.TABLE: 1
}, {
consts.DST: '203.0.114.0/24',
consts.GATEWAY: '203.0.113.5'
}, {
consts.DST: '203.0.115.1/32',
consts.GATEWAY: '203.0.113.5'
}, {
consts.DST: '203.0.114.0/24',
consts.GATEWAY: '203.0.113.5',
consts.TABLE: 1
}, {
consts.DST: '203.0.115.1/32',
consts.GATEWAY: '203.0.113.5',
consts.TABLE: 1
}
],
consts.RULES: [
{
consts.SRC: '2001:db8::2',
consts.SRC_LEN: 128,
consts.TABLE: 1
}, {
consts.SRC: '203.0.113.4',
consts.SRC_LEN: 32,
consts.TABLE: 1
},
],
consts.SCRIPTS: {
consts.IFACE_UP: [{
consts.COMMAND: (
"/usr/local/bin/lvs-masquerade.sh add ipv4 "
"{}".format(consts.NETNS_PRIMARY_INTERFACE))
}, {
consts.COMMAND: (
"/usr/local/bin/lvs-masquerade.sh add ipv6 "
"{}".format(consts.NETNS_PRIMARY_INTERFACE))
}],
consts.IFACE_DOWN: [{
consts.COMMAND: (
"/usr/local/bin/lvs-masquerade.sh delete ipv4 "
"{}".format(consts.NETNS_PRIMARY_INTERFACE))
}, {
consts.COMMAND: (
"/usr/local/bin/lvs-masquerade.sh delete ipv6 "
"{}".format(consts.NETNS_PRIMARY_INTERFACE))
}]
}
}
mock_dump.assert_called_once()
args = mock_dump.mock_calls[0][1]
test_utils.assert_interface_files_equal(
self, args[0], expected_dict)
mock_check_output.assert_called_with(
['ip', 'netns', 'exec', consts.AMPHORA_NAMESPACE,
'amphora-interface', 'up',
consts.NETNS_PRIMARY_INTERFACE], stderr=-2)
def test_ubuntu_get_interface(self):
self._test_get_interface(consts.UBUNTU)

View File

@ -45,7 +45,7 @@ class TestRootController(base_db_test.OctaviaDBTestBase):
def test_api_versions(self):
versions = self._get_versions_with_config()
version_ids = tuple(v.get('id') for v in versions)
self.assertEqual(25, len(version_ids))
self.assertEqual(26, len(version_ids))
self.assertIn('v2.0', version_ids)
self.assertIn('v2.1', version_ids)
self.assertIn('v2.2', version_ids)
@ -71,6 +71,7 @@ class TestRootController(base_db_test.OctaviaDBTestBase):
self.assertIn('v2.22', version_ids)
self.assertIn('v2.23', version_ids)
self.assertIn('v2.24', version_ids)
self.assertIn('v2.25', version_ids)
# Each version should have a 'self' 'href' to the API version URL
# [{u'rel': u'self', u'href': u'http://localhost/v2'}]

View File

@ -27,6 +27,7 @@ from octavia.common import constants
import octavia.common.context
from octavia.common import data_models
from octavia.common import exceptions
from octavia.common import utils
from octavia.network import base as network_base
from octavia.network import data_models as network_models
from octavia.tests.functional.api.v2 import base
@ -64,6 +65,33 @@ class TestLoadBalancer(base.BaseAPITest):
api_list = response.json.get(self.root_tag_list)
self.assertEqual([], api_list)
def _test_create_noop(self, **optionals):
self.conf.config(group='controller_worker',
network_driver='network_noop_driver')
self.conf.config(group='controller_worker',
compute_driver='compute_noop_driver')
self.conf.config(group='controller_worker',
amphora_driver='amphora_noop_driver')
lb_json = {'name': 'test_noop',
'project_id': self.project_id
}
lb_json.update(optionals)
body = self._build_body(lb_json)
response = self.post(self.LBS_PATH, body)
api_lb = response.json.get(self.root_tag)
self._assert_request_matches_response(lb_json, api_lb)
return api_lb
def test_create_noop_subnet_only(self):
self._test_create_noop(vip_subnet_id=uuidutils.generate_uuid())
def test_create_noop_network_only(self):
self._test_create_noop(vip_network_id=uuidutils.generate_uuid())
def test_create_noop_network_and_subnet(self):
self._test_create_noop(vip_network_id=uuidutils.generate_uuid(),
vip_subnet_id=uuidutils.generate_uuid())
def test_create(self, **optionals):
lb_json = {'name': 'test1',
'vip_subnet_id': uuidutils.generate_uuid(),
@ -196,7 +224,7 @@ class TestLoadBalancer(base.BaseAPITest):
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_subnet") as mock_get_subnet:
mock_get_network.return_value = network
mock_get_subnet.side_effect = [subnet1, subnet2]
mock_get_subnet.side_effect = [subnet1, subnet2, subnet2]
response = self.post(self.LBS_PATH, body)
api_lb = response.json.get(self.root_tag)
self._assert_request_matches_response(lb_json, api_lb)
@ -332,7 +360,7 @@ class TestLoadBalancer(base.BaseAPITest):
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_subnet") as mock_get_subnet:
mock_get_network.return_value = network
mock_get_subnet.side_effect = [subnet1, subnet2]
mock_get_subnet.side_effect = [subnet1, subnet2, subnet2]
response = self.post(self.LBS_PATH, body)
api_lb = response.json.get(self.root_tag)
self._assert_request_matches_response(lb_json, api_lb)
@ -401,7 +429,7 @@ class TestLoadBalancer(base.BaseAPITest):
self.assertEqual(ip_address, api_lb.get('vip_address'))
# Note: This test is using the unique local address range to
# validate that we handle a fully expaned IP address properly.
# validate that we handle a fully expanded IP address properly.
# This is not possible with the documentation/testnet range.
def test_create_with_vip_network_and_address_full_ipv6(self):
ip_address = 'fdff:ffff:ffff:ffff:ffff:ffff:ffff:ffff'
@ -436,8 +464,9 @@ class TestLoadBalancer(base.BaseAPITest):
def test_create_with_vip_port_1_fixed_ip(self):
ip_address = '198.51.100.1'
subnet = network_models.Subnet(id=uuidutils.generate_uuid())
network = network_models.Network(id=uuidutils.generate_uuid(),
subnet = network_models.Subnet(id=uuidutils.generate_uuid(),
network_id=uuidutils.generate_uuid())
network = network_models.Network(id=subnet.network_id,
subnets=[subnet])
fixed_ip = network_models.FixedIP(subnet_id=subnet.id,
ip_address=ip_address)
@ -457,10 +486,13 @@ class TestLoadBalancer(base.BaseAPITest):
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_port") as mock_get_port, mock.patch(
"octavia.api.drivers.noop_driver.driver.NoopManager."
"create_vip_port") as mock_provider:
"create_vip_port") as mock_provider, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager."
"get_subnet") as mock_get_subnet:
mock_get_network.return_value = network
mock_get_port.return_value = port
mock_provider.side_effect = lib_exceptions.NotImplementedError()
mock_get_subnet.return_value = subnet
response = self.post(self.LBS_PATH, body)
api_lb = response.json.get(self.root_tag)
self._assert_request_matches_response(lb_json, api_lb)
@ -501,8 +533,9 @@ class TestLoadBalancer(base.BaseAPITest):
def test_create_with_vip_port_and_address(self):
ip_address = '198.51.100.1'
subnet = network_models.Subnet(id=uuidutils.generate_uuid())
network = network_models.Network(id=uuidutils.generate_uuid(),
subnet = network_models.Subnet(id=uuidutils.generate_uuid(),
network_id=uuidutils.generate_uuid())
network = network_models.Network(id=subnet.network_id,
subnets=[subnet])
fixed_ip = network_models.FixedIP(subnet_id=subnet.id,
ip_address=ip_address)
@ -518,9 +551,12 @@ class TestLoadBalancer(base.BaseAPITest):
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_network") as mock_get_network, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_port") as mock_get_port:
".get_port") as mock_get_port, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager."
"get_subnet") as mock_get_subnet:
mock_get_network.return_value = network
mock_get_port.return_value = port
mock_get_subnet.return_value = subnet
response = self.post(self.LBS_PATH, body)
api_lb = response.json.get(self.root_tag)
self._assert_request_matches_response(lb_json, api_lb)
@ -558,8 +594,9 @@ class TestLoadBalancer(base.BaseAPITest):
self.assertEqual(err_msg, response.json.get('faultstring'))
def test_create_with_vip_full(self):
subnet = network_models.Subnet(id=uuidutils.generate_uuid())
network = network_models.Network(id=uuidutils.generate_uuid(),
subnet = network_models.Subnet(id=uuidutils.generate_uuid(),
network_id=uuidutils.generate_uuid())
network = network_models.Network(id=subnet.network_id,
subnets=[subnet])
port = network_models.Port(id=uuidutils.generate_uuid(),
network_id=network.id)
@ -573,9 +610,12 @@ class TestLoadBalancer(base.BaseAPITest):
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_network") as mock_get_network, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_port") as mock_get_port:
".get_port") as mock_get_port, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager."
"get_subnet") as mock_get_subnet:
mock_get_network.return_value = network
mock_get_port.return_value = port
mock_get_subnet.return_value = subnet
response = self.post(self.LBS_PATH, body)
api_lb = response.json.get(self.root_tag)
self._assert_request_matches_response(lb_json, api_lb)
@ -584,6 +624,59 @@ class TestLoadBalancer(base.BaseAPITest):
self.assertEqual(network.id, api_lb.get('vip_network_id'))
self.assertEqual(port.id, api_lb.get('vip_port_id'))
def test_create_with_multiple_vips(self):
subnet1 = network_models.Subnet(id=uuidutils.generate_uuid(),
cidr='10.0.0.0/24',
ip_version=4,
network_id=uuidutils.generate_uuid())
subnet2 = network_models.Subnet(id=uuidutils.generate_uuid(),
cidr='fc00::/7',
ip_version=6,
network_id=subnet1.network_id)
subnet3 = network_models.Subnet(id=uuidutils.generate_uuid(),
cidr='10.1.0.0/24',
ip_version=4,
network_id=subnet1.network_id)
network = network_models.Network(id=subnet1.network_id,
subnets=[subnet1, subnet2, subnet3])
port = network_models.Port(id=uuidutils.generate_uuid(),
network_id=network.id)
lb_json = {
'name': 'test1', 'description': 'test1_desc',
'vip_address': '10.0.0.1', 'vip_subnet_id': subnet1.id,
'vip_network_id': network.id, 'vip_port_id': port.id,
'project_id': self.project_id,
'additional_vips': [
{'subnet_id': subnet2.id,
'ip_address': 'fdff:ffff:ffff:ffff:ffff:ffff:ffff:ffff'},
{'subnet_id': subnet3.id,
'ip_address': '10.1.0.1'},
],
}
body = self._build_body(lb_json)
with mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_network") as mock_get_network, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_port") as mock_get_port, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_subnet") as mock_get_subnet:
mock_get_network.return_value = network
mock_get_port.return_value = port
mock_get_subnet.side_effect = [subnet1, subnet2, subnet3, subnet1]
response = self.post(self.LBS_PATH, body)
api_lb = response.json.get(self.root_tag)
self._assert_request_matches_response(lb_json, api_lb)
self.assertEqual('10.0.0.1', api_lb.get('vip_address'))
self.assertEqual(subnet1.id, api_lb.get('vip_subnet_id'))
self.assertEqual(network.id, api_lb.get('vip_network_id'))
self.assertEqual(
# Sort by ip_address so the list order will be guaranteed
sorted(lb_json['additional_vips'], key=lambda x: x['ip_address']),
sorted(api_lb['additional_vips'], key=lambda x: x['ip_address']))
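
    # The test above depends on the API-level rule that every additional VIP
    # subnet must sit on the same network as the primary VIP. A minimal
    # sketch of such a check, assuming a hypothetical validator name and
    # exception (this is not the actual Octavia code path):
    #
    # def validate_additional_vips(network, additional_vips):
    #     # network.subnets here is a list of Subnet objects, as built in
    #     # the test above; real drivers may return subnet IDs instead.
    #     allowed_subnet_ids = {subnet.id for subnet in network.subnets}
    #     for add_vip in additional_vips:
    #         if add_vip['subnet_id'] not in allowed_subnet_ids:
    #             raise ValueError(
    #                 'Additional VIP subnet %s is not on VIP network %s' %
    #                 (add_vip['subnet_id'], network.id))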
def test_create_neutron_failure(self):
class TestNeutronException(network_base.AllocateVIPException):
@ -595,8 +688,9 @@ class TestLoadBalancer(base.BaseAPITest):
def __str__(self):
return repr(self.message)
subnet = network_models.Subnet(id=uuidutils.generate_uuid(),
network_id=uuidutils.generate_uuid())
network = network_models.Network(id=subnet.network_id,
subnets=[subnet])
port = network_models.Port(id=uuidutils.generate_uuid(),
network_id=network.id)
@ -616,12 +710,15 @@ class TestLoadBalancer(base.BaseAPITest):
"octavia.network.drivers.noop_driver.driver.NoopManager"
".allocate_vip") as mock_allocate_vip, mock.patch(
"octavia.api.drivers.noop_driver.driver.NoopManager."
"create_vip_port") as mock_provider:
"create_vip_port") as mock_provider, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_subnet") as mock_get_subnet:
mock_get_network.return_value = network
mock_get_port.return_value = port
mock_allocate_vip.side_effect = TestNeutronException(
"octavia_msg", "neutron_msg", 409)
mock_provider.side_effect = lib_exceptions.NotImplementedError()
mock_get_subnet.return_value = subnet
response = self.post(self.LBS_PATH, body, status=409)
# Make sure the faultstring contains the neutron error and not
# the octavia error message
@ -654,7 +751,7 @@ class TestLoadBalancer(base.BaseAPITest):
network_id=uuidutils.generate_uuid())
port_qos_policy_id = uuidutils.generate_uuid()
ip_address = '192.168.50.50'
network = network_models.Network(id=subnet.network_id,
subnets=[subnet])
fixed_ip = network_models.FixedIP(subnet_id=subnet.id,
ip_address=ip_address)
@ -673,13 +770,16 @@ class TestLoadBalancer(base.BaseAPITest):
"octavia.network.drivers.noop_driver.driver.NoopManager"
".allocate_vip") as mock_allocate_vip, mock.patch(
"octavia.common.validate."
"qos_policy_exists") as m_get_qos:
"qos_policy_exists") as m_get_qos, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager."
"get_subnet") as mock_get_subnet:
m_get_qos.return_value = port_qos_policy_id
mock_allocate_vip.return_value = data_models.Vip(
ip_address=ip_address, subnet_id=subnet.id,
network_id=network.id, port_id=port.id)
m_get_network.return_value = network
mock_get_port.return_value = port
mock_get_subnet.return_value = subnet
response = self.post(self.LBS_PATH, body)
api_lb = response.json.get(self.root_tag)
self._assert_request_matches_response(lb_json, api_lb)
@ -695,7 +795,7 @@ class TestLoadBalancer(base.BaseAPITest):
port_qos_policy_id = uuidutils.generate_uuid()
new_qos_policy_id = uuidutils.generate_uuid()
ip_address = '192.168.50.50'
network = network_models.Network(id=subnet.network_id,
subnets=[subnet])
fixed_ip = network_models.FixedIP(subnet_id=subnet.id,
ip_address=ip_address)
@ -715,13 +815,16 @@ class TestLoadBalancer(base.BaseAPITest):
"octavia.network.drivers.noop_driver.driver.NoopManager"
".allocate_vip") as mock_allocate_vip, mock.patch(
"octavia.common.validate."
"qos_policy_exists") as m_get_qos:
"qos_policy_exists") as m_get_qos, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager."
"get_subnet") as mock_get_subnet:
m_get_qos.return_value = mock.ANY
mock_allocate_vip.return_value = data_models.Vip(
ip_address=ip_address, subnet_id=subnet.id,
network_id=network.id, port_id=port.id)
m_get_network.return_value = network
mock_get_port.return_value = port
mock_get_subnet.return_value = subnet
response = self.post(self.LBS_PATH, body)
api_lb = response.json.get(self.root_tag)
self._assert_request_matches_response(lb_json, api_lb)
@ -873,7 +976,6 @@ class TestLoadBalancer(base.BaseAPITest):
self._assert_request_matches_response(lb_json, api_lb)
def test_create_authorized(self, **optionals):
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
project_id = uuidutils.generate_uuid()
@ -907,7 +1009,6 @@ class TestLoadBalancer(base.BaseAPITest):
self._assert_request_matches_response(lb_json, api_lb)
def test_create_not_authorized(self, **optionals):
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
lb_json = {'name': 'test1',
@ -1305,7 +1406,6 @@ class TestLoadBalancer(base.BaseAPITest):
name='lb2', project_id=project_id)
self.create_load_balancer(uuidutils.generate_uuid(),
name='lb3', project_id=project_id)
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
LB_PROJECT_PATH = '{}?project_id={}'.format(self.LBS_PATH, project_id)
@ -1693,8 +1793,9 @@ class TestLoadBalancer(base.BaseAPITest):
def test_get(self):
project_id = uuidutils.generate_uuid()
subnet = network_models.Subnet(id=uuidutils.generate_uuid(),
network_id=uuidutils.generate_uuid())
network = network_models.Network(id=subnet.network_id,
subnets=[subnet])
port = network_models.Port(id=uuidutils.generate_uuid(),
network_id=network.id)
@ -1702,9 +1803,12 @@ class TestLoadBalancer(base.BaseAPITest):
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_network") as mock_get_network, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_port") as mock_get_port:
".get_port") as mock_get_port, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager."
"get_subnet") as mock_get_subnet:
mock_get_network.return_value = network
mock_get_port.return_value = port
mock_get_subnet.return_value = subnet
lb = self.create_load_balancer(subnet.id,
vip_address='10.0.0.1',
@ -1744,8 +1848,9 @@ class TestLoadBalancer(base.BaseAPITest):
def test_get_authorized(self):
project_id = uuidutils.generate_uuid()
subnet = network_models.Subnet(id=uuidutils.generate_uuid(),
network_id=uuidutils.generate_uuid())
network = network_models.Network(id=subnet.network_id,
subnets=[subnet])
port = network_models.Port(id=uuidutils.generate_uuid(),
network_id=network.id)
@ -1753,9 +1858,12 @@ class TestLoadBalancer(base.BaseAPITest):
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_network") as mock_get_network, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_port") as mock_get_port:
".get_port") as mock_get_port, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager."
"get_subnet") as mock_get_subnet:
mock_get_network.return_value = network
mock_get_port.return_value = port
mock_get_subnet.return_value = subnet
lb = self.create_load_balancer(subnet.id,
vip_address='10.0.0.1',
@ -1766,7 +1874,6 @@ class TestLoadBalancer(base.BaseAPITest):
description='desc1',
admin_state_up=False)
lb_dict = lb.get(self.root_tag)
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
with mock.patch.object(octavia.common.context.Context, 'project_id',
@ -1801,8 +1908,9 @@ class TestLoadBalancer(base.BaseAPITest):
def test_get_not_authorized(self):
project_id = uuidutils.generate_uuid()
subnet = network_models.Subnet(id=uuidutils.generate_uuid(),
network_id=uuidutils.generate_uuid())
network = network_models.Network(id=subnet.network_id,
subnets=[subnet])
port = network_models.Port(id=uuidutils.generate_uuid(),
network_id=network.id)
@ -1810,9 +1918,12 @@ class TestLoadBalancer(base.BaseAPITest):
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_network") as mock_get_network, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_port") as mock_get_port:
".get_port") as mock_get_port, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager."
"get_subnet") as mock_get_subnet:
mock_get_network.return_value = network
mock_get_port.return_value = port
mock_get_subnet.return_value = subnet
lb = self.create_load_balancer(subnet.id,
vip_address='10.0.0.1',
@ -1823,7 +1934,6 @@ class TestLoadBalancer(base.BaseAPITest):
description='desc1',
admin_state_up=False)
lb_dict = lb.get(self.root_tag)
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
with mock.patch.object(octavia.common.context.Context, 'project_id',
@ -1891,7 +2001,7 @@ class TestLoadBalancer(base.BaseAPITest):
admin_state_up=False)
lb_dict = lb.get(self.root_tag)
lb_json = self._build_body({'vip_subnet_id': '1234'})
self.set_lb_status(lb_dict.get('id'))
self.put(self.LB_PATH.format(lb_id=lb_dict.get('id')),
lb_json, status=400)
@ -1960,9 +2070,8 @@ class TestLoadBalancer(base.BaseAPITest):
admin_state_up=False)
lb_dict = lb.get(self.root_tag)
lb_json = self._build_body({'name': 'lb2'})
self.set_lb_status(lb_dict.get('id'))
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
with mock.patch.object(octavia.common.context.Context, 'project_id',
@ -2006,9 +2115,8 @@ class TestLoadBalancer(base.BaseAPITest):
admin_state_up=False)
lb_dict = lb.get(self.root_tag)
lb_json = self._build_body({'name': 'lb2'})
self.set_lb_status(lb_dict.get('id'))
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
with mock.patch.object(octavia.common.context.Context, 'project_id',
@ -2146,7 +2254,6 @@ class TestLoadBalancer(base.BaseAPITest):
admin_state_up=False)
lb_dict = lb.get(self.root_tag)
lb = self.set_lb_status(lb_dict.get('id'))
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
with mock.patch.object(octavia.common.context.Context, 'project_id',
@ -2188,8 +2295,7 @@ class TestLoadBalancer(base.BaseAPITest):
description='desc1',
admin_state_up=False)
lb_dict = lb.get(self.root_tag)
self.set_lb_status(lb_dict.get('id'))
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
with mock.patch.object(octavia.common.context.Context, 'project_id',
@ -2310,11 +2416,10 @@ class TestLoadBalancer(base.BaseAPITest):
description='desc1',
admin_state_up=False)
lb_dict = lb.get(self.root_tag)
self.set_lb_status(lb_dict.get('id'))
path = self._get_full_path(self.LB_PATH.format(
lb_id=lb_dict.get('id')) + "/failover")
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
with mock.patch.object(octavia.common.context.Context, 'project_id',
@ -2331,11 +2436,10 @@ class TestLoadBalancer(base.BaseAPITest):
description='desc1',
admin_state_up=False)
lb_dict = lb.get(self.root_tag)
self.set_lb_status(lb_dict.get('id'))
path = self._get_full_path(self.LB_PATH.format(
lb_id=lb_dict.get('id')) + "/failover")
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
with mock.patch.object(octavia.common.context.Context, 'project_id',
@ -2369,11 +2473,10 @@ class TestLoadBalancer(base.BaseAPITest):
description='desc1',
admin_state_up=False)
lb_dict = lb.get(self.root_tag)
self.set_lb_status(lb_dict.get('id'))
path = self._get_full_path(self.LB_PATH.format(
lb_id=lb_dict.get('id')) + "/failover")
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
with mock.patch.object(octavia.common.context.Context, 'project_id',
@ -2406,11 +2509,10 @@ class TestLoadBalancer(base.BaseAPITest):
description='desc1',
admin_state_up=False)
lb_dict = lb.get(self.root_tag)
self.set_lb_status(lb_dict.get('id'))
path = self._get_full_path(self.LB_PATH.format(
lb_id=lb_dict.get('id')) + "/failover")
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.NOAUTH)
with mock.patch.object(octavia.common.context.Context, 'project_id',
@ -2462,6 +2564,7 @@ class TestLoadBalancer(base.BaseAPITest):
@mock.patch('octavia.api.drivers.utils.call_provider')
def test_update_with_bad_provider(self, mock_provider):
mock_provider.return_value = (mock.MagicMock(), [])
api_lb = self.create_load_balancer(
uuidutils.generate_uuid()).get(self.root_tag)
self.set_lb_status(lb_id=api_lb.get('id'))
@ -2475,6 +2578,7 @@ class TestLoadBalancer(base.BaseAPITest):
@mock.patch('octavia.api.drivers.utils.call_provider')
def test_delete_with_bad_provider(self, mock_provider):
mock_provider.return_value = (mock.MagicMock(), [])
api_lb = self.create_load_balancer(
uuidutils.generate_uuid()).get(self.root_tag)
self.set_lb_status(lb_id=api_lb.get('id'))
@ -2505,6 +2609,7 @@ class TestLoadBalancer(base.BaseAPITest):
@mock.patch('octavia.api.drivers.utils.call_provider')
def test_update_with_provider_not_implemented(self, mock_provider):
mock_provider.return_value = (mock.MagicMock(), [])
api_lb = self.create_load_balancer(
uuidutils.generate_uuid()).get(self.root_tag)
self.set_lb_status(lb_id=api_lb.get('id'))
@ -2518,6 +2623,7 @@ class TestLoadBalancer(base.BaseAPITest):
@mock.patch('octavia.api.drivers.utils.call_provider')
def test_delete_with_provider_not_implemented(self, mock_provider):
mock_provider.return_value = (mock.MagicMock(), [])
api_lb = self.create_load_balancer(
uuidutils.generate_uuid()).get(self.root_tag)
self.set_lb_status(lb_id=api_lb.get('id'))
@ -2548,6 +2654,7 @@ class TestLoadBalancer(base.BaseAPITest):
@mock.patch('octavia.api.drivers.utils.call_provider')
def test_update_with_provider_unsupport_option(self, mock_provider):
mock_provider.return_value = (mock.MagicMock(), [])
api_lb = self.create_load_balancer(
uuidutils.generate_uuid()).get(self.root_tag)
self.set_lb_status(lb_id=api_lb.get('id'))
@ -2561,6 +2668,7 @@ class TestLoadBalancer(base.BaseAPITest):
@mock.patch('octavia.api.drivers.utils.call_provider')
def test_delete_with_provider_unsupport_option(self, mock_provider):
mock_provider.return_value = (mock.MagicMock(), [])
api_lb = self.create_load_balancer(
uuidutils.generate_uuid()).get(self.root_tag)
self.set_lb_status(lb_id=api_lb.get('id'))
@ -2603,6 +2711,9 @@ class TestLoadBalancerGraph(base.BaseAPITest):
observed_listeners = observed_graph_copy.pop('listeners', [])
expected_pools = expected_graph.pop('pools', [])
observed_pools = observed_graph_copy.pop('pools', [])
expected_additional_vips = expected_graph.pop('additional_vips', [])
observed_additional_vips = observed_graph_copy.pop('additional_vips',
[])
self.assertEqual(expected_graph, observed_graph_copy)
self.assertEqual(len(expected_pools), len(observed_pools))
@ -2660,9 +2771,15 @@ class TestLoadBalancerGraph(base.BaseAPITest):
l7rule.pop('tenant_id'))
self.assertTrue(l7rule.pop('id'))
self.assertIn(observed_listener, expected_listeners)
self.assertEqual(len(expected_additional_vips),
len(observed_additional_vips))
for observed_add_vip in observed_additional_vips:
if not observed_add_vip['ip_address']:
del observed_add_vip['ip_address']
self.assertIn(observed_add_vip, expected_additional_vips)
def _get_lb_bodies(self, create_listeners, expected_listeners,
create_pools=None, additional_vips=None):
create_lb = {
'name': 'lb1',
'project_id': self._project_id,
@ -2673,6 +2790,8 @@ class TestLoadBalancerGraph(base.BaseAPITest):
'listeners': create_listeners,
'pools': create_pools or []
}
if additional_vips:
create_lb.update({'additional_vips': additional_vips})
expected_lb = {
'description': '',
'admin_state_up': True,
@ -2999,6 +3118,23 @@ class TestLoadBalancerGraph(base.BaseAPITest):
expected_l7rules[0].update(create_l7rules[0])
return create_l7rules, expected_l7rules
def test_with_additional_vips(self):
create_lb, expected_lb = self._get_lb_bodies(
[], [], additional_vips=[
{'subnet_id': uuidutils.generate_uuid()}])
# Pre-populate test subnet/network data
network_driver = utils.get_network_driver()
vip_subnet = network_driver.get_subnet(create_lb['vip_subnet_id'])
additional_subnet = network_driver.get_subnet(
create_lb['additional_vips'][0]['subnet_id'])
additional_subnet.network_id = vip_subnet.network_id
body = self._build_body(create_lb)
response = self.post(self.LBS_PATH, body)
api_lb = response.json.get(self.root_tag)
self._assert_graphs_equal(expected_lb, api_lb)
def test_with_one_listener(self):
create_listener, expected_listener = self._get_listener_bodies()
create_lb, expected_lb = self._get_lb_bodies([create_listener],
@ -3784,7 +3920,6 @@ class TestLoadBalancerGraph(base.BaseAPITest):
uuidutils.generate_uuid(),
project_id=project_id).get('loadbalancer')
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
@ -3819,7 +3954,6 @@ class TestLoadBalancerGraph(base.BaseAPITest):
def test_statuses_not_authorized(self):
lb = self.create_load_balancer(
uuidutils.generate_uuid()).get('loadbalancer')
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
with mock.patch.object(octavia.common.context.Context, 'project_id',
@ -3885,7 +4019,6 @@ class TestLoadBalancerGraph(base.BaseAPITest):
total_connections=random.randint(1, 9),
request_errors=random.randint(1, 9))
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
@ -3932,7 +4065,6 @@ class TestLoadBalancerGraph(base.BaseAPITest):
bytes_in=random.randint(1, 9),
bytes_out=random.randint(1, 9),
total_connections=random.randint(1, 9))
auth_strategy = self.conf.conf.api_settings.get('auth_strategy')
self.conf.config(group='api_settings', auth_strategy=constants.TESTING)
with mock.patch.object(octavia.common.context.Context, 'project_id',

View File

@ -121,7 +121,8 @@ class AllRepositoriesTest(base.OctaviaDBTestBase):
'amphorahealth', 'vrrpgroup', 'l7rule', 'l7policy',
'amp_build_slots', 'amp_build_req', 'quotas',
'flavor', 'flavor_profile', 'listener_cidr',
'availability_zone', 'availability_zone_profile',
'additional_vip')
for repo_attr in repo_attr_names:
single_repo = getattr(self.repos, repo_attr, None)
message = ("Class Repositories should have %s instance"
@ -159,9 +160,13 @@ class AllRepositoriesTest(base.OctaviaDBTestBase):
'subnet_id': uuidutils.generate_uuid(),
'network_id': uuidutils.generate_uuid(),
'qos_policy_id': None, 'octavia_owned': True}
additional_vips = [{'subnet_id': uuidutils.generate_uuid(),
'ip_address': '192.0.2.2'}]
lb_dm = self.repos.create_load_balancer_and_vip(self.session, lb, vip,
additional_vips)
lb_dm_dict = lb_dm.to_dict()
del lb_dm_dict['vip']
del lb_dm_dict['additional_vips']
del lb_dm_dict['listeners']
del lb_dm_dict['amphorae']
del lb_dm_dict['pools']

View File

@ -121,10 +121,11 @@ class TestOSUtils(base.TestCase):
DEST2 = u'203.0.113.0/24'
NEXTHOP = u'192.0.2.1'
MTU = 1450
FIXED_IP_IPV6 = u'2001:0db8:0000:0000:0000:0000:0000:000a'
# Subnet prefix is purposefully not 32, because that coincidentally
# matches the result of any arbitrary IPv4->prefixlen conversion
SUBNET_CIDR_IPV6 = u'2001:db8::/70'
GATEWAY_IPV6 = u'2001:0db8:0000:0000:0000:0000:0000:0001'
ip = ipaddress.ip_address(FIXED_IP)
network = ipaddress.ip_network(SUBNET_CIDR)
@ -139,23 +140,28 @@ class TestOSUtils(base.TestCase):
self.ubuntu_os_util.write_vip_interface_file(
interface=netns_interface,
vips={
'address': FIXED_IP,
'ip_version': ip.version,
'prefixlen': network.prefixlen,
'gateway': GATEWAY,
'host_routes': host_routes
},
mtu=MTU,
vrrp_info=None
)
mock_vip_interface_file.assert_called_once_with(
name=netns_interface,
vips={
'address': FIXED_IP,
'ip_version': ip.version,
'prefixlen': network.prefixlen,
'gateway': GATEWAY,
'host_routes': host_routes
},
mtu=MTU,
vrrp_info=None,
fixed_ips=None,
topology="SINGLE")
mock_vip_interface_file.return_value.write.assert_called_once()
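
    # For orientation: these tests exercise the refactored
    # write_vip_interface_file() signature, which replaces the old flat
    # vip/ip_version/prefixlen/gateway/vrrp_ip/host_routes keywords with two
    # structured arguments. A sketch of the shapes used here (values are
    # illustrative, not taken from the change itself):
    #
    # vips = {
    #     'address': '203.0.113.5',   # the VIP address itself
    #     'ip_version': 4,
    #     'prefixlen': 24,
    #     'gateway': '203.0.113.1',
    #     'host_routes': [],          # {'destination': ..., 'nexthop': ...}
    # }
    # vrrp_info = None  # dict with 'ip'/'prefixlen' for ACTIVE_STANDBY,
    #                   # None for SINGLE topology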
@ -165,23 +171,27 @@ class TestOSUtils(base.TestCase):
self.ubuntu_os_util.write_vip_interface_file(
interface=netns_interface,
vips={
'address': FIXED_IP_IPV6,
'ip_version': ipv6.version,
'prefixlen': networkv6.prefixlen,
'gateway': GATEWAY_IPV6,
'host_routes': host_routes
},
mtu=MTU,
vrrp_info=None)
mock_vip_interface_file.assert_called_once_with(
name=netns_interface,
vips={
'address': FIXED_IP_IPV6,
'ip_version': ipv6.version,
'prefixlen': networkv6.prefixlen,
'gateway': GATEWAY_IPV6,
'host_routes': host_routes
},
mtu=MTU,
vrrp_info=None,
fixed_ips=None,
topology="SINGLE")

View File

@ -12,7 +12,6 @@
# License for the specific language governing permissions and limitations
# under the License.
import os
from unittest import mock
from oslo_config import cfg
@ -114,20 +113,9 @@ class TestPlug(base.TestCase):
)
mock_webob.Response.assert_any_call(json={
'message': 'OK',
'details': 'VIPs plugged on interface {interface}: {vips}'.format(
vips=FAKE_IP_IPV4, interface='eth1')
}, status=202)
@mock.patch('octavia.amphorae.backends.agent.api_server.plug.Plug.'
'_interface_by_mac', return_value=FAKE_INTERFACE)
@ -156,21 +144,46 @@ class TestPlug(base.TestCase):
)
mock_webob.Response.assert_any_call(json={
'message': 'OK',
'details': 'VIPs plugged on interface {interface}: {vips}'.format(
vips=FAKE_IP_IPV6_EXPANDED, interface='eth1')
}, status=202)
@mock.patch('octavia.amphorae.backends.agent.api_server.plug.Plug.'
'_interface_by_mac', return_value=FAKE_INTERFACE)
@mock.patch('pyroute2.NSPopen', create=True)
@mock.patch.object(plug, "webob")
@mock.patch('pyroute2.IPRoute', create=True)
@mock.patch('pyroute2.netns.create', create=True)
@mock.patch('pyroute2.NetNS', create=True)
@mock.patch('subprocess.check_output')
@mock.patch('shutil.copytree')
@mock.patch('os.makedirs')
def test_plug_vip_ipv4_and_ipv6(
self, mock_makedirs, mock_copytree,
mock_check_output, mock_netns, mock_netns_create,
mock_pyroute2, mock_webob, mock_nspopen, mock_by_mac):
conf = self.useFixture(oslo_fixture.Config(cfg.CONF))
conf.config(group='controller_worker',
loadbalancer_topology=constants.TOPOLOGY_ACTIVE_STANDBY)
additional_vips = [
{'ip_address': FAKE_IP_IPV4, 'subnet_cidr': FAKE_CIDR_IPV4,
'host_routes': [], 'gateway': FAKE_GATEWAY_IPV4}
]
m = mock.mock_open()
with mock.patch('os.open'), mock.patch.object(os, 'fdopen', m):
self.test_plug.plug_vip(
vip=FAKE_IP_IPV6,
subnet_cidr=FAKE_CIDR_IPV6,
gateway=FAKE_GATEWAY_IPV6,
mac_address=FAKE_MAC_ADDRESS,
additional_vips=additional_vips
)
mock_webob.Response.assert_any_call(json={
'message': 'OK',
'details': 'VIPs plugged on interface {interface}: {vips}'.format(
vips=", ".join([FAKE_IP_IPV6_EXPANDED, FAKE_IP_IPV4]),
interface='eth1')
}, status=202)
calls = [mock.call('amphora-haproxy', ['/sbin/sysctl', '--system'],
stdout=subprocess.PIPE),
mock.call('amphora-haproxy', ['modprobe', 'ip_vs'],
stdout=subprocess.PIPE),
mock.call('amphora-haproxy',
['/sbin/sysctl', '-w',
'net.ipv6.conf.all.forwarding=1'],
stdout=subprocess.PIPE),
mock.call('amphora-haproxy',
['/sbin/sysctl', '-w', 'net.ipv4.vs.conntrack=1'],
stdout=subprocess.PIPE)]
mock_nspopen.assert_has_calls(calls, any_order=True)
@mock.patch.object(plug, "webob")
@mock.patch('pyroute2.IPRoute', create=True)
@ -183,15 +196,19 @@ class TestPlug(base.TestCase):
mock_check_output, mock_netns, mock_netns_create,
mock_pyroute2, mock_webob):
m = mock.mock_open()
BAD_IP_ADDRESS = "error"
with mock.patch('os.open'), mock.patch.object(os, 'fdopen', m):
self.test_plug.plug_vip(
vip="error",
vip=BAD_IP_ADDRESS,
subnet_cidr=FAKE_CIDR_IPV4,
gateway=FAKE_GATEWAY_IPV4,
mac_address=FAKE_MAC_ADDRESS
)
mock_webob.Response.assert_any_call(
json={'message': ("Invalid VIP: '{ip}' does not appear to be an "
"IPv4 or IPv6 address").format(
ip=BAD_IP_ADDRESS)},
status=400)
@mock.patch("octavia.amphorae.backends.agent.api_server.osutils."
"BaseOS.write_interface_file")
@ -342,12 +359,20 @@ class TestPlug(base.TestCase):
mock_write_vip_interface.assert_called_once_with(
interface=FAKE_INTERFACE,
vips=[{
'ip_address': vip_net_info['vip'],
'ip_version': 4,
'prefixlen': 16,
'gateway': vip_net_info['gateway'],
'host_routes': [],
}],
vrrp_info={
'ip': vip_net_info['vrrp_ip'],
'ip_version': 4,
'prefixlen': 16,
'gateway': vip_net_info['gateway'],
'host_routes': [],
},
fixed_ips=fixed_ips, mtu=mtu)
mock_if_up.assert_called_once_with(FAKE_INTERFACE, 'vip')

View File

@ -39,14 +39,20 @@ class TestInterfaceFile(base.TestCase):
vip_interface_file = interface_file.VIPInterfaceFile(
name=netns_interface,
mtu=MTU,
vips=[{
'ip_address': VIP_ADDRESS,
'ip_version': cidr.version,
'prefixlen': prefixlen,
'gateway': GATEWAY,
'host_routes': [
{'destination': DEST1, 'nexthop': NEXTHOP}
],
}],
vrrp_info={
'ip': VRRP_IP_ADDRESS,
'prefixlen': prefixlen
},
fixed_ips=[],
topology=TOPOLOGY)
expected_dict = {
@ -148,14 +154,19 @@ class TestInterfaceFile(base.TestCase):
vip_interface_file = interface_file.VIPInterfaceFile(
name=netns_interface,
mtu=MTU,
vips=[{
'ip_address': VIP_ADDRESS,
'ip_version': cidr.version,
'prefixlen': prefixlen,
'gateway': GATEWAY,
'host_routes': [
{'destination': DEST1, 'nexthop': NEXTHOP}
],
}],
vrrp_info={
'ip': VRRP_IP_ADDRESS,
'prefixlen': prefixlen,
},
fixed_ips=[{'ip_address': FIXED_IP,
'subnet_cidr': SUBNET2_CIDR,
'host_routes': [
@ -260,12 +271,15 @@ class TestInterfaceFile(base.TestCase):
vip_interface_file = interface_file.VIPInterfaceFile(
name=netns_interface,
mtu=MTU,
vips=[{
'ip_address': VIP_ADDRESS,
'ip_version': cidr.version,
'prefixlen': prefixlen,
'gateway': None,
'host_routes': [],
}],
vrrp_info=None,
fixed_ips=[],
topology=TOPOLOGY)
expected_dict = {
@ -334,12 +348,18 @@ class TestInterfaceFile(base.TestCase):
vip_interface_file = interface_file.VIPInterfaceFile(
name=netns_interface,
mtu=MTU,
vips=[{
'ip_address': VIP_ADDRESS,
'ip_version': cidr.version,
'prefixlen': prefixlen,
'gateway': GATEWAY,
'host_routes': [],
}],
vrrp_info={
'ip': VRRP_IP_ADDRESS,
'prefixlen': prefixlen
},
fixed_ips=[],
topology=TOPOLOGY)
expected_dict = {
@ -426,14 +446,20 @@ class TestInterfaceFile(base.TestCase):
vip_interface_file = interface_file.VIPInterfaceFile(
name=netns_interface,
mtu=MTU,
vips=[{
'ip_address': VIP_ADDRESS,
'ip_version': cidr.version,
'prefixlen': prefixlen,
'gateway': GATEWAY,
'host_routes': [
{'destination': DEST1, 'nexthop': NEXTHOP}
],
}],
vrrp_info={
'ip': VRRP_IP_ADDRESS,
'prefixlen': prefixlen,
},
fixed_ips=[],
topology=TOPOLOGY)
expected_dict = {

View File

@ -55,7 +55,10 @@ KERNAL_FILE_SAMPLE_V6 = (
CFG_FILE_TEMPLATE_v4 = (
"# Configuration for Listener %(listener_id)s\n\n"
"net_namespace %(ns_name)s\n\n"
"virtual_server 10.0.0.37 7777 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.37 7777\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo rr\n"
" lb_kind NAT\n"
" protocol udp\n\n\n"
@ -96,7 +99,10 @@ CFG_FILE_TEMPLATE_v4 = (
CFG_FILE_TEMPLATE_v6 = (
"# Configuration for Listener %(listener_id)s\n\n"
"net_namespace %(ns_name)s\n\n"
"virtual_server fd79:35e2:9963:0:f816:3eff:fe6d:7a2a 7777 {\n"
"virtual_server_group ipv6-group {\n"
" fd79:35e2:9963:0:f816:3eff:fe6d:7a2a 7777\n"
"}\n\n"
"virtual_server group ipv6-group {\n"
" lb_algo rr\n"
" lb_kind NAT\n"
" protocol udp\n\n\n"
@ -136,6 +142,54 @@ CFG_FILE_TEMPLATE_v6 = (
" }\n\n"
"}")
CFG_FILE_TEMPLATE_mixed = (
"# Configuration for Listener %(listener_id)s\n\n"
"net_namespace %(ns_name)s\n\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.37 7777\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo rr\n"
" lb_kind NAT\n"
" protocol udp\n\n\n"
" # Configuration for Pool %(pool_id)s\n"
" # Configuration for Member %(member_id1)s\n"
" real_server 10.0.0.25 2222 {\n"
" weight 3\n"
" MISC_CHECK {\n\n"
" misc_path \"/usr/bin/check_script.sh\"\n\n"
" misc_timeout 5\n\n"
" }\n\n"
" }\n\n"
" # Member %(member_id2)s is disabled\n\n"
"}\n"
"virtual_server_group ipv6-group {\n"
" fd79:35e2:9963:0:f816:3eff:fe6d:7a2a 7777\n"
"}\n\n"
"virtual_server group ipv6-group {\n"
" lb_algo rr\n"
" lb_kind NAT\n"
" protocol udp\n\n\n"
" # Configuration for Pool %(pool_id)s\n"
" # Configuration for Member %(member_id3)s\n"
" real_server fd79:35e2:9963:0:f816:3eff:feca:b7bf 2222 {\n"
" weight 3\n"
" MISC_CHECK {\n\n"
" misc_path \"/usr/bin/check_script.sh\"\n\n"
" misc_timeout 5\n\n"
" }\n\n"
" }\n\n"
" # Configuration for Member %(member_id4)s\n"
" real_server fd79:35e2:9963:0:f816:3eff:fe9d:94df 3333 {\n"
" weight 2\n"
" MISC_CHECK {\n\n"
" misc_path \"/usr/bin/check_script.sh\"\n\n"
" misc_timeout 5\n\n"
" }\n\n"
" }\n\n"
" # Member %(member_id5)s is disabled\n\n"
"}")
CFG_FILE_TEMPLATE_DISABLED_LISTENER = (
"# Listener %(listener_id)s is disabled \n\n"
"net_namespace %(ns_name)s\n\n"
@ -178,6 +232,8 @@ class LvsQueryTestCase(base.TestCase):
self.member_id3_v6 = uuidutils.generate_uuid()
self.member_id4_v6 = uuidutils.generate_uuid()
self.member_id5_v6 = uuidutils.generate_uuid()
self.listener_id_mixed = uuidutils.generate_uuid()
self.pool_id_mixed = uuidutils.generate_uuid()
self.disabled_listener_id = uuidutils.generate_uuid()
cfg_content_v4 = CFG_FILE_TEMPLATE_v4 % {
'listener_id': self.listener_id_v4,
@ -198,6 +254,16 @@ class LvsQueryTestCase(base.TestCase):
'member_id4': self.member_id4_v6,
'member_id5': self.member_id5_v6
}
cfg_content_mixed = CFG_FILE_TEMPLATE_mixed % {
'listener_id': self.listener_id_mixed,
'ns_name': constants.AMPHORA_NAMESPACE,
'pool_id': self.pool_id_mixed,
'member_id1': self.member_id1_v4,
'member_id2': self.member_id2_v4,
'member_id3': self.member_id3_v6,
'member_id4': self.member_id4_v6,
'member_id5': self.member_id5_v6
}
cfg_content_disabled_listener = (
CFG_FILE_TEMPLATE_DISABLED_LISTENER % {
'listener_id': self.listener_id_v6,
@ -208,6 +274,9 @@ class LvsQueryTestCase(base.TestCase):
util.keepalived_lvs_cfg_path(self.listener_id_v4), cfg_content_v4))
self.useFixture(test_utils.OpenFixture(
util.keepalived_lvs_cfg_path(self.listener_id_v6), cfg_content_v6))
self.useFixture(test_utils.OpenFixture(
util.keepalived_lvs_cfg_path(self.listener_id_mixed),
cfg_content_mixed))
self.useFixture(test_utils.OpenFixture(
util.keepalived_lvs_cfg_path(self.disabled_listener_id),
cfg_content_disabled_listener))
@ -215,7 +284,7 @@ class LvsQueryTestCase(base.TestCase):
@mock.patch('subprocess.check_output')
def test_get_listener_realserver_mapping(self, mock_check_output):
# Ipv4 resolver
input_listener_ip_port = ['10.0.0.37:7777']
target_ns = constants.AMPHORA_NAMESPACE
mock_check_output.return_value = KERNAL_FILE_SAMPLE_V4
result = lvs_query.get_listener_realserver_mapping(
@ -234,7 +303,8 @@ class LvsQueryTestCase(base.TestCase):
self.assertEqual((True, expected), result)
# Ipv6 resolver
input_listener_ip_port = [
'[fd79:35e2:9963:0:f816:3eff:fe6d:7a2a]:7777']
mock_check_output.return_value = KERNAL_FILE_SAMPLE_V6
result = lvs_query.get_listener_realserver_mapping(
target_ns, input_listener_ip_port,
@ -263,7 +333,7 @@ class LvsQueryTestCase(base.TestCase):
mock_check_output.return_value = KERNAL_FILE_SAMPLE_V4
for listener_ip_port in ['10.0.0.37:7776', '10.0.0.31:7777']:
result = lvs_query.get_listener_realserver_mapping(
target_ns, [listener_ip_port],
health_monitor_enabled=True)
self.assertEqual((False, {}), result)
@ -272,7 +342,7 @@ class LvsQueryTestCase(base.TestCase):
'[fd79:35e2:9963:0:f816:3eff:fe6d:7a2a]:7776',
'[fd79:35e2:9973:0:f816:3eff:fe6d:7a2a]:7777']:
result = lvs_query.get_listener_realserver_mapping(
target_ns, [listener_ip_port],
health_monitor_enabled=True)
self.assertEqual((False, {}), result)
@ -281,7 +351,7 @@ class LvsQueryTestCase(base.TestCase):
res = lvs_query.get_lvs_listener_resource_ipports_nsname(
self.listener_id_v4)
expected = {'Listener': {'id': self.listener_id_v4,
'ipports': ['10.0.0.37:7777']},
'Pool': {'id': self.pool_id_v4},
'Members': [{'id': self.member_id1_v4,
'ipport': '10.0.0.25:2222'},
@ -298,7 +368,7 @@ class LvsQueryTestCase(base.TestCase):
self.listener_id_v6)
expected = {'Listener': {
'id': self.listener_id_v6,
'ipports': ['[fd79:35e2:9963:0:f816:3eff:fe6d:7a2a]:7777']},
'Pool': {'id': self.pool_id_v6},
'Members': [
{'id': self.member_id1_v6,
@ -318,6 +388,28 @@ class LvsQueryTestCase(base.TestCase):
self.disabled_listener_id)
self.assertEqual((None, constants.AMPHORA_NAMESPACE), res)
# multi-vip/mixed
res = lvs_query.get_lvs_listener_resource_ipports_nsname(
self.listener_id_mixed)
expected = {'Listener': {
'id': self.listener_id_mixed,
'ipports': [
'10.0.0.37:7777',
'[fd79:35e2:9963:0:f816:3eff:fe6d:7a2a]:7777']},
'Pool': {'id': self.pool_id_mixed},
'Members': [
{'id': self.member_id1_v4,
'ipport': '10.0.0.25:2222'},
{'id': self.member_id3_v6,
'ipport': '[fd79:35e2:9963:0:f816:3eff:feca:b7bf]:2222'},
{'id': self.member_id4_v6,
'ipport': '[fd79:35e2:9963:0:f816:3eff:fe9d:94df]:3333'},
{'id': self.member_id2_v4,
'ipport': None},
{'id': self.member_id5_v6,
'ipport': None}]}
self.assertEqual((expected, constants.AMPHORA_NAMESPACE), res)
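
        # The 'ipports' entries above follow the usual convention of
        # bracketing IPv6 addresses. A small standalone helper showing that
        # formatting (an illustration, not code from this change):
        #
        # import ipaddress
        #
        # def format_ipport(address, port):
        #     if ipaddress.ip_address(address).version == 6:
        #         return '[%s]:%s' % (address, port)
        #     return '%s:%s' % (address, port)
        #
        # assert format_ipport('10.0.0.37', 7777) == '10.0.0.37:7777'
        # assert format_ipport('fd79:35e2:9963:0:f816:3eff:fe6d:7a2a',
        #                      7777) == \
        #     '[fd79:35e2:9963:0:f816:3eff:fe6d:7a2a]:7777'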
@mock.patch('os.stat')
@mock.patch('subprocess.check_output')
def test_get_lvs_listener_pool_status(self, mock_check_output,
@ -393,7 +485,7 @@ class LvsQueryTestCase(base.TestCase):
{
'Listener': {
'id': self.listener_id_v4,
'ipports': ['10.0.0.37:7777']}},
constants.AMPHORA_NAMESPACE)
res = lvs_query.get_lvs_listener_pool_status(self.listener_id_v4)
self.assertEqual({}, res)
@ -410,7 +502,7 @@ class LvsQueryTestCase(base.TestCase):
mock_get_resource_ipports.return_value = (
{
'Listener': {'id': self.listener_id_v4,
'ipports': ['10.0.0.37:7777']},
'Pool': {'id': self.pool_id_v4}},
constants.AMPHORA_NAMESPACE)
res = lvs_query.get_lvs_listener_pool_status(self.listener_id_v4)

View File

@ -111,7 +111,8 @@ class TestHaproxyAmphoraLoadBalancerDriverTest(base.TestCase):
'mac_address': FAKE_MAC_ADDRESS,
'vrrp_ip': self.amp.vrrp_ip,
'mtu': FAKE_MTU,
'host_routes': host_routes_data,
'additional_vips': []}
self.timeout_dict = {constants.REQ_CONN_TIMEOUT: 1,
constants.REQ_READ_TIMEOUT: 2,

View File

@ -111,7 +111,8 @@ class TestHaproxyAmphoraLoadBalancerDriverTest(base.TestCase):
'mac_address': FAKE_MAC_ADDRESS,
'vrrp_ip': self.amp.vrrp_ip,
'mtu': FAKE_MTU,
'host_routes': host_routes_data,
'additional_vips': []}
self.timeout_dict = {constants.REQ_CONN_TIMEOUT: 1,
constants.REQ_READ_TIMEOUT: 2,
@ -747,6 +748,7 @@ class TestHaproxyAmphoraLoadBalancerDriverTest(base.TestCase):
gateway=None,
vrrp_ip=self.amp.vrrp_ip,
host_routes=[],
additional_vips=[],
mtu=FAKE_MTU
)))

View File

@ -20,6 +20,7 @@ from oslo_config import fixture as oslo_fixture
from octavia.amphorae.drivers.keepalived.jinja import jinja_cfg
from octavia.common import constants
from octavia.network import data_models as n_data_models
import octavia.tests.unit.base as base
@ -61,47 +62,50 @@ class TestVRRPRestDriver(base.TestCase):
self.lb.vip.ip_address = '10.1.0.5'
self.lb.vrrp_group.advert_int = 10
self.ref_conf = ("vrrp_script check_script {\n"
" script /tmp/test/vrrp/check_script.sh\n"
" interval 5\n"
" fall 2\n"
" rise 2\n"
"}\n"
"\n"
"vrrp_instance TESTGROUP {\n"
" state MASTER\n"
" interface eth1\n"
" virtual_router_id 1\n"
" priority 100\n"
" nopreempt\n"
" accept\n"
" garp_master_refresh 5\n"
" garp_master_refresh_repeat 2\n"
" advert_int 10\n"
" authentication {\n"
" auth_type PASS\n"
" auth_pass TESTPASSWORD\n"
" }\n"
"\n"
" unicast_src_ip 10.0.0.1\n"
" unicast_peer {\n"
" 10.0.0.2\n"
" }\n"
"\n"
" virtual_ipaddress {\n"
" 10.1.0.5\n"
" }\n\n"
" virtual_routes {\n"
" 10.1.0.0/24 dev eth1 src 10.1.0.5 scope link "
"table 1\n"
" }\n\n"
" virtual_rules {\n"
" from 10.1.0.5/32 table 1 priority 100\n"
" }\n\n"
" track_script {\n"
" check_script\n"
" }\n"
"}")
self.ref_conf = (
"vrrp_script check_script {\n"
" script /tmp/test/vrrp/check_script.sh\n"
" interval 5\n"
" fall 2\n"
" rise 2\n"
"}\n"
"\n"
"vrrp_instance TESTGROUP {\n"
" state MASTER\n"
" interface eth1\n"
" virtual_router_id 1\n"
" priority 100\n"
" nopreempt\n"
" accept\n"
" garp_master_refresh 5\n"
" garp_master_refresh_repeat 2\n"
" advert_int 10\n"
" authentication {\n"
" auth_type PASS\n"
" auth_pass TESTPASSWORD\n"
" }\n"
"\n"
" unicast_src_ip 10.0.0.1\n"
" unicast_peer {\n"
" 10.0.0.2\n"
" }\n"
"\n"
" virtual_ipaddress {\n"
" 10.1.0.5\n"
" }\n\n"
" virtual_ipaddress_excluded {\n"
" }\n\n"
" virtual_routes {\n"
" 10.1.0.0/24 dev eth1 src 10.1.0.5 scope link table 1\n"
" default via 10.1.0.1 dev eth1 onlink table 1\n"
" }\n\n"
" virtual_rules {\n"
" from 10.1.0.5/32 table 1 priority 100\n"
" }\n\n"
" track_script {\n"
" check_script\n"
" }\n"
"}")
self.amphora1v6 = copy.deepcopy(self.amphora1)
self.amphora1v6.vrrp_ip = '2001:db8::10'
@ -111,55 +115,208 @@ class TestVRRPRestDriver(base.TestCase):
self.lbv6.amphorae = [self.amphora1v6, self.amphora2v6]
self.lbv6.vip.ip_address = '2001:db8::15'
self.ref_v6_conf = ("vrrp_script check_script {\n"
" script /tmp/test/vrrp/check_script.sh\n"
" interval 5\n"
" fall 2\n"
" rise 2\n"
"}\n"
"\n"
"vrrp_instance TESTGROUP {\n"
" state MASTER\n"
" interface eth1\n"
" virtual_router_id 1\n"
" priority 100\n"
" nopreempt\n"
" accept\n"
" garp_master_refresh 5\n"
" garp_master_refresh_repeat 2\n"
" advert_int 10\n"
" authentication {\n"
" auth_type PASS\n"
" auth_pass TESTPASSWORD\n"
" }\n"
"\n"
" unicast_src_ip 2001:db8::10\n"
" unicast_peer {\n"
" 2001:db8::11\n"
" }\n"
"\n"
" virtual_ipaddress {\n"
" 2001:db8::15\n"
" }\n\n"
" virtual_routes {\n"
" 2001:db8::/64 dev eth1 src "
"2001:db8::15 scope link table 1\n"
" }\n\n"
" virtual_rules {\n"
" from 2001:db8::15/128 table 1 "
"priority 100\n"
" }\n\n"
" track_script {\n"
" check_script\n"
" }\n"
"}")
self.ref_v6_conf = (
"vrrp_script check_script {\n"
" script /tmp/test/vrrp/check_script.sh\n"
" interval 5\n"
" fall 2\n"
" rise 2\n"
"}\n"
"\n"
"vrrp_instance TESTGROUP {\n"
" state MASTER\n"
" interface eth1\n"
" virtual_router_id 1\n"
" priority 100\n"
" nopreempt\n"
" accept\n"
" garp_master_refresh 5\n"
" garp_master_refresh_repeat 2\n"
" advert_int 10\n"
" authentication {\n"
" auth_type PASS\n"
" auth_pass TESTPASSWORD\n"
" }\n"
"\n"
" unicast_src_ip 2001:db8::10\n"
" unicast_peer {\n"
" 2001:db8::11\n"
" }\n"
"\n"
" virtual_ipaddress {\n"
" 2001:db8::15\n"
" }\n\n"
" virtual_ipaddress_excluded {\n"
" }\n\n"
" virtual_routes {\n"
" 2001:db8::/64 dev eth1 src "
"2001:db8::15 scope link table 1\n"
" default via 2001:db8::ff dev eth1 onlink table 1\n"
" }\n\n"
" virtual_rules {\n"
" from 2001:db8::15/128 table 1 priority 100\n"
" }\n\n"
" track_script {\n"
" check_script\n"
" }\n"
"}")
self.ref_v4_v6_conf = (
"vrrp_script check_script {\n"
" script /tmp/test/vrrp/check_script.sh\n"
" interval 5\n"
" fall 2\n"
" rise 2\n"
"}\n"
"\n"
"vrrp_instance TESTGROUP {\n"
" state MASTER\n"
" interface eth1\n"
" virtual_router_id 1\n"
" priority 100\n"
" nopreempt\n"
" accept\n"
" garp_master_refresh 5\n"
" garp_master_refresh_repeat 2\n"
" advert_int 10\n"
" authentication {\n"
" auth_type PASS\n"
" auth_pass TESTPASSWORD\n"
" }\n"
"\n"
" unicast_src_ip 10.0.0.1\n"
" unicast_peer {\n"
" 10.0.0.2\n"
" }\n"
"\n"
" virtual_ipaddress {\n"
" 10.1.0.5\n"
" }\n\n"
" virtual_ipaddress_excluded {\n"
" 2001:db8::15\n"
" }\n\n"
" virtual_routes {\n"
" 10.1.0.0/24 dev eth1 src 10.1.0.5 scope link table 1\n"
" default via 10.1.0.1 dev eth1 onlink table 1\n"
" 2001:db8::/64 dev eth1 src "
"2001:db8::15 scope link table 2\n"
" default via 2001:db8::ff dev eth1 onlink table 2\n"
" }\n\n"
" virtual_rules {\n"
" from 10.1.0.5/32 table 1 priority 100\n"
" from 2001:db8::15/128 table 2 priority 100\n"
" }\n\n"
" track_script {\n"
" check_script\n"
" }\n"
"}")
self.ref_v6_v4_conf = (
"vrrp_script check_script {\n"
" script /tmp/test/vrrp/check_script.sh\n"
" interval 5\n"
" fall 2\n"
" rise 2\n"
"}\n"
"\n"
"vrrp_instance TESTGROUP {\n"
" state MASTER\n"
" interface eth1\n"
" virtual_router_id 1\n"
" priority 100\n"
" nopreempt\n"
" accept\n"
" garp_master_refresh 5\n"
" garp_master_refresh_repeat 2\n"
" advert_int 10\n"
" authentication {\n"
" auth_type PASS\n"
" auth_pass TESTPASSWORD\n"
" }\n"
"\n"
" unicast_src_ip 2001:db8::10\n"
" unicast_peer {\n"
" 2001:db8::11\n"
" }\n"
"\n"
" virtual_ipaddress {\n"
" 2001:db8::15\n"
" }\n\n"
" virtual_ipaddress_excluded {\n"
" 10.1.0.5\n"
" }\n\n"
" virtual_routes {\n"
" 2001:db8::/64 dev eth1 src "
"2001:db8::15 scope link table 1\n"
" default via 2001:db8::ff dev eth1 onlink table 1\n"
" 10.1.0.0/24 dev eth1 src 10.1.0.5 scope link table 2\n"
" default via 10.1.0.1 dev eth1 onlink table 2\n"
" }\n\n"
" virtual_rules {\n"
" from 2001:db8::15/128 table 1 priority 100\n"
" from 10.1.0.5/32 table 2 priority 100\n"
" }\n\n"
" track_script {\n"
" check_script\n"
" }\n"
"}")
def test_build_keepalived_config(self):
mock_subnet = n_data_models.Subnet()
mock_subnet.cidr = '10.1.0.0/24'
mock_subnet.gateway_ip = '10.1.0.1'
mock_subnet.host_routes = []
amp_net_config = n_data_models.AmphoraNetworkConfig(
vip_subnet=mock_subnet)
config = self.templater.build_keepalived_config(
self.lb, self.amphora1, amp_net_config)
self.assertEqual(self.ref_conf, config)
def test_build_keepalived_ipv6_config(self):
mock_subnet = n_data_models.Subnet()
mock_subnet.cidr = '2001:db8::/64'
mock_subnet.gateway_ip = '2001:db8::ff'
mock_subnet.host_routes = []
amp_net_config = n_data_models.AmphoraNetworkConfig(
vip_subnet=mock_subnet)
config = self.templater.build_keepalived_config(
self.lbv6, self.amphora1v6, amp_net_config)
self.assertEqual(self.ref_v6_conf, config)
def test_build_keepalived_config_with_additional_vips(self):
mock_subnet1 = n_data_models.Subnet()
mock_subnet1.cidr = '10.1.0.0/24'
mock_subnet1.gateway_ip = '10.1.0.1'
mock_subnet1.host_routes = []
mock_subnet2 = n_data_models.Subnet()
mock_subnet2.cidr = '2001:db8::/64'
mock_subnet2.gateway_ip = '2001:db8::ff'
mock_subnet2.host_routes = []
# Use IPv4 as the primary VIP, IPv6 as secondary
additional_vip = n_data_models.AdditionalVipData(
ip_address=self.lbv6.vip.ip_address,
subnet=mock_subnet2
)
amp_net_config = n_data_models.AmphoraNetworkConfig(
vip_subnet=mock_subnet1,
additional_vip_data=[additional_vip])
config = self.templater.build_keepalived_config(
self.lb, self.amphora1, amp_net_config)
self.assertEqual(self.ref_v4_v6_conf, config)
# Use IPv6 as the primary VIP, IPv4 as secondary
additional_vip = n_data_models.AdditionalVipData(
ip_address=self.lb.vip.ip_address,
subnet=mock_subnet1
)
amp_net_config = n_data_models.AmphoraNetworkConfig(
vip_subnet=mock_subnet2,
additional_vip_data=[additional_vip])
config = self.templater.build_keepalived_config(
self.lbv6, self.amphora1v6, amp_net_config)
self.assertEqual(self.ref_v6_v4_conf, config)

View File

@ -18,6 +18,7 @@ from oslo_utils import uuidutils
from octavia.amphorae.drivers.keepalived import vrrp_rest_driver
from octavia.common import constants
from octavia.network import data_models as n_data_models
import octavia.tests.unit.base as base
# Version 1.0 is functionally identical to all versions before it
@ -42,8 +43,11 @@ class TestVRRPRestDriver(base.TestCase):
self.lb_mock.amphorae = [self.amphora_mock]
self.amphorae_network_config = {}
vip_subnet = mock.MagicMock()
self.vip_cidr = vip_subnet.cidr = '192.0.2.0/24'
one_amp_net_config = n_data_models.AmphoraNetworkConfig(
vip_subnet=vip_subnet
)
self.amphorae_network_config[self.amphora_mock.id] = one_amp_net_config
super().setUp()
@ -56,6 +60,9 @@ class TestVRRPRestDriver(base.TestCase):
self.keepalived_mixin.update_vrrp_conf(
self.lb_mock, self.amphorae_network_config, self.amphora_mock)
mock_templater.assert_called_with(
self.lb_mock, self.amphora_mock,
self.amphorae_network_config[self.amphora_mock.id])
self.clients[API_VERSION].upload_vrrp_config.assert_called_once_with(
self.amphora_mock,
self.FAKE_CONFIG)

View File

@ -34,13 +34,15 @@ class TestAmphoraDriver(base.TestRpc):
def test_create_vip_port(self, mock_get_net_driver):
mock_net_driver = mock.MagicMock()
mock_get_net_driver.return_value = mock_net_driver
mock_net_driver.allocate_vip.return_value = self.sample_data.db_vip, []
provider_vip_dict, add_vip_dicts = self.amp_driver.create_vip_port(
self.sample_data.lb_id, self.sample_data.project_id,
self.sample_data.provider_vip_dict,
self.sample_data.provider_additional_vip_dicts)
self.assertEqual(self.sample_data.provider_vip_dict, provider_vip_dict)
self.assertFalse(add_vip_dicts)
@mock.patch('octavia.common.utils.get_network_driver')
def test_create_vip_port_without_port_security_enabled(
@ -55,7 +57,8 @@ class TestAmphoraDriver(base.TestRpc):
self.assertRaises(exceptions.DriverError,
self.amp_driver.create_vip_port,
self.sample_data.lb_id, self.sample_data.project_id,
self.sample_data.provider_vip_dict,
self.sample_data.provider_additional_vip_dicts)
@mock.patch('octavia.common.utils.get_network_driver')
def test_create_vip_port_failed(self, mock_get_net_driver):
@ -67,7 +70,8 @@ class TestAmphoraDriver(base.TestRpc):
self.assertRaises(exceptions.DriverError,
self.amp_driver.create_vip_port,
self.sample_data.lb_id, self.sample_data.project_id,
self.sample_data.provider_vip_dict,
self.sample_data.provider_additional_vip_dicts)
# Load Balancer
@mock.patch('oslo_messaging.RPCClient.cast')

View File

@ -15,6 +15,7 @@ from unittest import mock
from octavia_lib.api.drivers import data_models as driver_dm
from octavia_lib.api.drivers import exceptions
from octavia_lib.common import constants as lib_consts
from oslo_utils import uuidutils
from octavia.api.drivers.amphora_driver.v2 import driver
@ -22,7 +23,6 @@ from octavia.common import constants as consts
from octavia.network import base as network_base
from octavia.tests.common import sample_data_models
from octavia.tests.unit import base
class TestAmphoraDriver(base.TestRpc):
@ -35,13 +35,15 @@ class TestAmphoraDriver(base.TestRpc):
def test_create_vip_port(self, mock_get_net_driver):
mock_net_driver = mock.MagicMock()
mock_get_net_driver.return_value = mock_net_driver
mock_net_driver.allocate_vip.return_value = self.sample_data.db_vip, []
provider_vip_dict, add_vip_dicts = self.amp_driver.create_vip_port(
self.sample_data.lb_id, self.sample_data.project_id,
self.sample_data.provider_vip_dict,
self.sample_data.provider_additional_vip_dicts)
self.assertEqual(self.sample_data.provider_vip_dict, provider_vip_dict)
self.assertFalse(add_vip_dicts)
@mock.patch('octavia.common.utils.get_network_driver')
def test_create_vip_port_without_port_security_enabled(
@ -56,7 +58,8 @@ class TestAmphoraDriver(base.TestRpc):
self.assertRaises(exceptions.DriverError,
self.amp_driver.create_vip_port,
self.sample_data.lb_id, self.sample_data.project_id,
self.sample_data.provider_vip_dict,
self.sample_data.provider_additional_vip_dicts)
@mock.patch('octavia.common.utils.get_network_driver')
def test_create_vip_port_failed(self, mock_get_net_driver):
@ -68,7 +71,8 @@ class TestAmphoraDriver(base.TestRpc):
self.assertRaises(exceptions.DriverError,
self.amp_driver.create_vip_port,
self.sample_data.lb_id, self.sample_data.project_id,
self.sample_data.provider_vip_dict,
self.sample_data.provider_additional_vip_dicts)
# Load Balancer
@mock.patch('oslo_messaging.RPCClient.cast')

View File

@ -148,9 +148,11 @@ class TestNoopProviderDriver(base.TestCase):
"loadbalancer."}
def test_create_vip_port(self):
vip_dict, additional_vip_dicts = self.driver.create_vip_port(
self.loadbalancer_id,
self.project_id,
self.ref_vip.to_dict(),
None)
self.assertEqual(self.ref_vip.to_dict(), vip_dict)

View File

@ -103,6 +103,11 @@ class TestLoadBalancerPOST(base.BaseTypesTest, TestLoadBalancer):
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_additional_vips(self):
body = {"additional_vips": [{"subnet_id": uuidutils.generate_uuid(),
"ip_address": "10.0.0.1"}]}
wsme_json.fromjson(self._type, body)
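
        # For context, the WSME type this test deserializes into presumably
        # looks like the sketch below; the exact class is an assumption based
        # on Octavia's other v2 API types, with subnet_id mandatory and
        # ip_address optional as described in the api-ref:
        #
        # from wsme import types as wtypes
        #
        # from octavia.api.common import types
        #
        # class AdditionalVipsType(types.BaseType):
        #     # subnet_id must always be supplied; ip_address may be omitted,
        #     # in which case one is allocated from the subnet.
        #     subnet_id = wtypes.wsattr(wtypes.UuidType(), mandatory=True)
        #     ip_address = wtypes.wsattr(types.IPAddressType())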
class TestLoadBalancerPUT(base.BaseTypesTest, TestLoadBalancer):

View File

@ -436,6 +436,24 @@ class TestHaproxyCfg(base.TestCase):
sample_configs_combined.sample_base_expected_config(backend=be),
rendered_obj)
def test_render_template_additional_vips(self):
fe = ("frontend sample_listener_id_1\n"
" maxconn {maxconn}\n"
" bind 10.0.0.2:80\n"
" bind 10.0.1.2:80\n"
" bind 2001:db8::2:80\n"
" mode http\n"
" default_backend sample_pool_id_1:sample_listener_id_1\n"
" timeout client 50000\n").format(
maxconn=constants.HAPROXY_DEFAULT_MAXCONN)
rendered_obj = self.jinja_cfg.render_loadbalancer_obj(
sample_configs_combined.sample_amphora_tuple(),
[sample_configs_combined.sample_listener_tuple(
additional_vips=True)])
self.assertEqual(
sample_configs_combined.sample_base_expected_config(frontend=fe),
rendered_obj)
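
        # The expected frontend above simply gains one "bind" line per VIP,
        # primary VIP first. An illustrative helper (not the actual Jinja
        # template) producing those lines:
        #
        # def render_binds(vip_addresses, port):
        #     # HAProxy accepts bare IPv6 addresses in bind statements.
        #     return ['bind %s:%d' % (address, port)
        #             for address in vip_addresses]
        #
        # print('\n'.join(
        #     render_binds(['10.0.0.2', '10.0.1.2', '2001:db8::2'], 80)))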
def test_render_template_member_backup(self):
be = ("backend sample_pool_id_1:sample_listener_id_1\n"
" mode http\n"

View File

@ -156,6 +156,23 @@ class TestHaproxyCfg(base.TestCase):
sample_configs_split.sample_base_expected_config(backend=be),
rendered_obj)
def test_render_template_additional_vips(self):
fe = ("frontend sample_listener_id_1\n"
" maxconn {maxconn}\n"
" bind 10.0.0.2:80\n"
" bind 10.0.1.2:80\n"
" bind 2001:db8::2:80\n"
" mode http\n"
" default_backend sample_pool_id_1\n"
" timeout client 50000\n").format(
maxconn=constants.HAPROXY_MAX_MAXCONN)
rendered_obj = self.jinja_cfg.render_loadbalancer_obj(
sample_configs_split.sample_amphora_tuple(),
sample_configs_split.sample_listener_tuple(additional_vips=True))
self.assertEqual(
sample_configs_split.sample_base_expected_config(frontend=fe),
rendered_obj)
def test_render_template_member_backup(self):
be = ("backend sample_pool_id_1\n"
" mode http\n"

View File

@ -41,7 +41,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol UDP\n"
@ -87,7 +90,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol UDP\n"
@ -130,7 +136,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol UDP\n"
@ -173,7 +182,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol UDP\n"
@ -228,7 +240,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol UDP\n\n\n"
@ -246,7 +261,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol UDP\n\n\n"
@ -357,7 +375,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol UDP\n"
@ -418,7 +439,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol UDP\n"
@ -476,7 +500,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol SCTP\n"
@ -522,7 +549,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol SCTP\n"
@ -565,7 +595,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol SCTP\n"
@ -608,7 +641,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol SCTP\n"
@ -748,7 +784,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol SCTP\n"
@ -809,7 +848,10 @@ class TestLvsCfg(base.TestCase):
exp = ("# Configuration for Loadbalancer sample_loadbalancer_id_1\n"
"# Configuration for Listener sample_listener_id_1\n\n"
"net_namespace amphora-haproxy\n\n"
"virtual_server 10.0.0.2 80 {\n"
"virtual_server_group ipv4-group {\n"
" 10.0.0.2 80\n"
"}\n\n"
"virtual_server group ipv4-group {\n"
" lb_algo wrr\n"
" lb_kind NAT\n"
" protocol SCTP\n"

View File

@ -384,6 +384,7 @@ RET_AMPHORA = {
'vrrp_priority': None}
RET_LB = {
'additional_vips': [],
'host_amphora': RET_AMPHORA,
'id': 'sample_loadbalancer_id_1',
'vip_address': '10.0.0.2',
@ -395,6 +396,7 @@ RET_LB = {
'amphorae': [sample_amphora_tuple()]}
RET_LB_L7 = {
'additional_vips': [],
'host_amphora': RET_AMPHORA,
'id': 'sample_loadbalancer_id_1',
'vip_address': '10.0.0.2',
@ -425,6 +427,7 @@ RET_UDP_HEALTH_MONITOR = {
RET_UDP_MEMBER = {
'id': 'member_id_1',
'address': '192.0.2.10',
'ip_version': 4,
'protocol_port': 82,
'weight': 13,
'enabled': True,
@ -435,6 +438,7 @@ RET_UDP_MEMBER = {
RET_UDP_MEMBER_MONITOR_IP_PORT = {
'id': 'member_id_1',
'address': '192.0.2.10',
'ip_version': 4,
'protocol_port': 82,
'weight': 13,
'enabled': True,
@ -445,6 +449,7 @@ RET_UDP_MEMBER_MONITOR_IP_PORT = {
UDP_MEMBER_1 = {
'id': 'sample_member_id_1',
'address': '10.0.0.99',
'ip_version': 4,
'enabled': True,
'protocol_port': 82,
'weight': 13,
@ -455,6 +460,7 @@ UDP_MEMBER_1 = {
UDP_MEMBER_2 = {
'id': 'sample_member_id_2',
'address': '10.0.0.98',
'ip_version': 4,
'enabled': True,
'protocol_port': 82,
'weight': 13,
@ -508,6 +514,7 @@ RET_SCTP_HEALTH_MONITOR = {
RET_SCTP_MEMBER = {
'id': 'member_id_1',
'address': '192.0.2.10',
'ip_version': 4,
'protocol_port': 82,
'weight': 13,
'enabled': True,
@ -518,6 +525,7 @@ RET_SCTP_MEMBER = {
RET_SCTP_MEMBER_MONITOR_IP_PORT = {
'id': 'member_id_1',
'address': '192.0.2.10',
'ip_version': 4,
'protocol_port': 82,
'weight': 13,
'enabled': True,
@ -528,6 +536,7 @@ RET_SCTP_MEMBER_MONITOR_IP_PORT = {
SCTP_MEMBER_1 = {
'id': 'sample_member_id_1',
'address': '10.0.0.99',
'ip_version': 4,
'enabled': True,
'protocol_port': 82,
'weight': 13,
@ -538,6 +547,7 @@ SCTP_MEMBER_1 = {
SCTP_MEMBER_2 = {
'id': 'sample_member_id_2',
'address': '10.0.0.98',
'ip_version': 4,
'enabled': True,
'protocol_port': 82,
'weight': 13,
@ -574,7 +584,7 @@ RET_SCTP_LISTENER = {
def sample_listener_loadbalancer_tuple(
topology=None, enabled=True, pools=None):
topology=None, enabled=True, pools=None, additional_vips=False):
if topology and topology in ['ACTIVE_STANDBY', 'ACTIVE_ACTIVE']:
more_amp = True
else:
@ -582,7 +592,7 @@ def sample_listener_loadbalancer_tuple(
topology = constants.TOPOLOGY_SINGLE
in_lb = collections.namedtuple(
'load_balancer', 'id, name, vip, amphorae, topology, '
'pools, listeners, enabled, project_id')
'pools, listeners, enabled, project_id, additional_vips')
return in_lb(
id='sample_loadbalancer_id_1',
name='test-lb',
@ -599,6 +609,9 @@ def sample_listener_loadbalancer_tuple(
listeners=[],
enabled=enabled,
project_id='12345',
additional_vips=[sample_vip_tuple('10.0.1.2'),
sample_vip_tuple('2001:db8::2')]
if additional_vips else []
)
@ -618,7 +631,7 @@ def sample_lb_with_udp_listener_tuple(
in_lb = collections.namedtuple(
'load_balancer', 'id, name, vip, amphorae, topology, '
'pools, enabled, project_id, listeners')
'pools, enabled, project_id, listeners, additional_vips')
return in_lb(
id='sample_loadbalancer_id_1',
name='test-lb',
@ -634,7 +647,8 @@ def sample_lb_with_udp_listener_tuple(
listeners=listeners,
pools=pools or [],
enabled=enabled,
project_id='12345'
project_id='12345',
additional_vips=[]
)
@ -688,7 +702,8 @@ def sample_listener_tuple(proto=None, monitor=True, alloc_default_pool=True,
sample_default_pool=1,
pool_enabled=True,
backend_alpn_protocols=constants.
AMPHORA_SUPPORTED_ALPN_PROTOCOLS):
AMPHORA_SUPPORTED_ALPN_PROTOCOLS,
additional_vips=False):
proto = 'HTTP' if proto is None else proto
if be_proto is None:
be_proto = 'HTTP' if proto == 'TERMINATED_HTTPS' else proto
@ -775,7 +790,7 @@ def sample_listener_tuple(proto=None, monitor=True, alloc_default_pool=True,
protocol_port=port,
protocol=proto,
load_balancer=sample_listener_loadbalancer_tuple(
topology=topology, pools=pools),
topology=topology, pools=pools, additional_vips=additional_vips),
peer_port=peer_port,
default_pool=sample_pool_tuple(
listener_id='sample_listener_id_1',

View File

@ -383,6 +383,7 @@ RET_AMPHORA = {
'vrrp_priority': None}
RET_LB = {
'additional_vips': [],
'host_amphora': RET_AMPHORA,
'id': 'sample_loadbalancer_id_1',
'vip_address': '10.0.0.2',
@ -392,6 +393,7 @@ RET_LB = {
'global_connection_limit': constants.HAPROXY_MAX_MAXCONN}
RET_LB_L7 = {
'additional_vips': [],
'host_amphora': RET_AMPHORA,
'id': 'sample_loadbalancer_id_1',
'vip_address': '10.0.0.2',
@ -521,7 +523,7 @@ def sample_loadbalancer_tuple(proto=None, monitor=True, persistence=True,
def sample_listener_loadbalancer_tuple(proto=None, topology=None,
enabled=True):
enabled=True, additional_vips=False):
proto = 'HTTP' if proto is None else proto
if topology and topology in ['ACTIVE_STANDBY', 'ACTIVE_ACTIVE']:
more_amp = True
@ -530,7 +532,7 @@ def sample_listener_loadbalancer_tuple(proto=None, topology=None,
topology = constants.TOPOLOGY_SINGLE
in_lb = collections.namedtuple(
'load_balancer', 'id, name, protocol, vip, amphorae, topology, '
'listeners, enabled, project_id')
'listeners, enabled, project_id, additional_vips')
return in_lb(
id='sample_loadbalancer_id_1',
name='test-lb',
@ -546,7 +548,10 @@ def sample_listener_loadbalancer_tuple(proto=None, topology=None,
topology=topology,
listeners=[],
enabled=enabled,
project_id='12345'
project_id='12345',
additional_vips=[sample_vip_tuple('10.0.1.2'),
sample_vip_tuple('2001:db8::2')]
if additional_vips else []
)
@ -627,7 +632,8 @@ def sample_listener_tuple(proto=None, monitor=True, alloc_default_pool=True,
pool_ca_cert=False, pool_crl=False,
tls_enabled=False, hm_host_http_check=False,
id='sample_listener_id_1', recursive_nest=False,
provisioning_status=constants.ACTIVE):
provisioning_status=constants.ACTIVE,
additional_vips=False):
proto = 'HTTP' if proto is None else proto
if be_proto is None:
be_proto = 'HTTP' if proto == 'TERMINATED_HTTPS' else proto
@ -691,8 +697,8 @@ def sample_listener_tuple(proto=None, monitor=True, alloc_default_pool=True,
project_id='12345',
protocol_port=port,
protocol=proto,
load_balancer=sample_listener_loadbalancer_tuple(proto=proto,
topology=topology),
load_balancer=sample_listener_loadbalancer_tuple(
proto=proto, topology=topology, additional_vips=additional_vips),
peer_port=peer_port,
default_pool=sample_pool_tuple(
proto=be_proto, monitor=monitor, persistence=persistence,

View File

@ -199,12 +199,13 @@ class TestLoadBalancerFlows(base.TestCase):
self.assertIn(constants.DELTAS, create_flow.provides)
self.assertIn(constants.UPDATED_PORTS, create_flow.provides)
self.assertIn(constants.VIP, create_flow.provides)
self.assertIn(constants.ADDITIONAL_VIPS, create_flow.provides)
self.assertIn(constants.AMP_DATA, create_flow.provides)
self.assertIn(constants.SERVER_PEM, create_flow.provides)
self.assertIn(constants.AMPHORA_NETWORK_CONFIG, create_flow.provides)
self.assertEqual(7, len(create_flow.requires))
self.assertEqual(13, len(create_flow.provides))
self.assertEqual(14, len(create_flow.provides))
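
The bump from 13 to 14 provided names corresponds to ADDITIONAL_VIPS joining the flow's published results. A minimal taskflow illustration of how a task publishes a second named result (toy task, not Octavia's):

from taskflow import task
from taskflow.patterns import linear_flow

# Toy task: returning a 2-tuple with default_provides naming two
# results makes both names appear in flow.provides.
class ToyAllocateVIP(task.Task):
    default_provides = ('vip', 'additional_vips')

    def execute(self):
        return {'ip_address': '198.51.100.5'}, []

flow = linear_flow.Flow('create-lb')
flow.add(ToyAllocateVIP())
assert 'additional_vips' in flow.provides
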
def test_get_create_load_balancer_flows_active_standby_listeners(
self, mock_get_net_driver):
@ -226,12 +227,13 @@ class TestLoadBalancerFlows(base.TestCase):
self.assertIn(constants.DELTAS, create_flow.provides)
self.assertIn(constants.UPDATED_PORTS, create_flow.provides)
self.assertIn(constants.VIP, create_flow.provides)
self.assertIn(constants.ADDITIONAL_VIPS, create_flow.provides)
self.assertIn(constants.AMP_DATA, create_flow.provides)
self.assertIn(constants.AMPHORAE_NETWORK_CONFIG,
create_flow.provides)
self.assertEqual(6, len(create_flow.requires))
self.assertEqual(16, len(create_flow.provides),
self.assertEqual(17, len(create_flow.provides),
create_flow.provides)
def _test_get_failover_LB_flow_single(self, amphorae):
@ -261,11 +263,12 @@ class TestLoadBalancerFlows(base.TestCase):
self.assertIn(constants.LOADBALANCER, failover_flow.provides)
self.assertIn(constants.SERVER_PEM, failover_flow.provides)
self.assertIn(constants.VIP, failover_flow.provides)
self.assertIn(constants.ADDITIONAL_VIPS, failover_flow.provides)
self.assertIn(constants.VIP_SG_ID, failover_flow.provides)
self.assertEqual(6, len(failover_flow.requires),
failover_flow.requires)
self.assertEqual(12, len(failover_flow.provides),
self.assertEqual(13, len(failover_flow.provides),
failover_flow.provides)
def test_get_failover_LB_flow_no_amps_single(self, mock_get_net_driver):
@ -336,11 +339,12 @@ class TestLoadBalancerFlows(base.TestCase):
self.assertIn(constants.LOADBALANCER, failover_flow.provides)
self.assertIn(constants.SERVER_PEM, failover_flow.provides)
self.assertIn(constants.VIP, failover_flow.provides)
self.assertIn(constants.ADDITIONAL_VIPS, failover_flow.provides)
self.assertIn(constants.VIP_SG_ID, failover_flow.provides)
self.assertEqual(6, len(failover_flow.requires),
failover_flow.requires)
self.assertEqual(16, len(failover_flow.provides),
self.assertEqual(17, len(failover_flow.provides),
failover_flow.provides)
def test_get_failover_LB_flow_no_amps_act_stdby(self, mock_get_net_driver):

View File

@ -1317,22 +1317,24 @@ class TestNetworkTasks(base.TestCase):
mock_get_net_driver.return_value = mock_driver
net = network_tasks.AllocateVIP()
mock_driver.allocate_vip.return_value = LB.vip
mock_driver.allocate_vip.return_value = LB.vip, []
mock_driver.reset_mock()
self.assertEqual(LB.vip, net.execute(LB))
self.assertEqual((LB.vip, []), net.execute(LB))
mock_driver.allocate_vip.assert_called_once_with(LB)
# revert
vip_mock = mock.MagicMock()
net.revert(vip_mock, LB)
additional_vips_mock = mock.MagicMock()
net.revert((vip_mock, additional_vips_mock), LB)
mock_driver.deallocate_vip.assert_called_once_with(vip_mock)
# revert exception
mock_driver.reset_mock()
vip_mock.reset_mock()
additional_vips_mock.reset_mock()
mock_driver.deallocate_vip.side_effect = Exception('DeallVipException')
vip_mock = mock.MagicMock()
net.revert(vip_mock, LB)
net.revert((vip_mock, additional_vips_mock), LB)
mock_driver.deallocate_vip.assert_called_once_with(vip_mock)
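
A condensed sketch of the task behaviour these assertions pin down: execute passes the driver's (vip, additional_vips) tuple through, and revert unpacks the tuple but only deallocates the primary VIP (simplified stand-in, without Octavia's logging or taskflow base class):

# Simplified stand-in for the AllocateVIP task, assuming a network
# driver that exposes allocate_vip()/deallocate_vip().
class AllocateVIPSketch:
    def __init__(self, network_driver):
        self.network_driver = network_driver

    def execute(self, loadbalancer):
        return self.network_driver.allocate_vip(loadbalancer)

    def revert(self, result, loadbalancer, *args, **kwargs):
        vip, _additional_vips = result
        try:
            self.network_driver.deallocate_vip(vip)
        except Exception:
            pass  # revert must not raise; the real task logs instead
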
def test_allocate_vip_for_failover(self, mock_get_net_driver):
@ -1340,10 +1342,10 @@ class TestNetworkTasks(base.TestCase):
mock_get_net_driver.return_value = mock_driver
net = network_tasks.AllocateVIPforFailover()
mock_driver.allocate_vip.return_value = LB.vip
mock_driver.allocate_vip.return_value = LB.vip, []
mock_driver.reset_mock()
self.assertEqual(LB.vip, net.execute(LB))
self.assertEqual((LB.vip, []), net.execute(LB))
mock_driver.allocate_vip.assert_called_once_with(LB)
# revert

View File

@ -229,10 +229,10 @@ class TestLoadBalancerFlows(base.TestCase):
self.assertIn(constants.UPDATED_PORTS, create_flow.provides)
self.assertIn(constants.SERVER_PEM, create_flow.provides)
self.assertIn(constants.VIP, create_flow.provides)
self.assertIn(constants.ADDITIONAL_VIPS, create_flow.provides)
self.assertEqual(7, len(create_flow.requires))
self.assertEqual(13, len(create_flow.provides),
create_flow.provides)
self.assertEqual(14, len(create_flow.provides))
def test_get_create_load_balancer_flows_active_standby_listeners(
self, mock_get_net_driver):
@ -265,9 +265,10 @@ class TestLoadBalancerFlows(base.TestCase):
self.assertIn(constants.SERVER_PEM, create_flow.provides)
self.assertIn(constants.SUBNET, create_flow.provides)
self.assertIn(constants.VIP, create_flow.provides)
self.assertIn(constants.ADDITIONAL_VIPS, create_flow.provides)
self.assertEqual(6, len(create_flow.requires), create_flow.requires)
self.assertEqual(16, len(create_flow.provides),
self.assertEqual(17, len(create_flow.provides),
create_flow.provides)
def _test_get_failover_LB_flow_single(self, amphorae):
@ -297,11 +298,12 @@ class TestLoadBalancerFlows(base.TestCase):
self.assertIn(constants.LOADBALANCER, failover_flow.provides)
self.assertIn(constants.SERVER_PEM, failover_flow.provides)
self.assertIn(constants.VIP, failover_flow.provides)
self.assertIn(constants.ADDITIONAL_VIPS, failover_flow.provides)
self.assertIn(constants.VIP_SG_ID, failover_flow.provides)
self.assertEqual(6, len(failover_flow.requires),
failover_flow.requires)
self.assertEqual(12, len(failover_flow.provides),
self.assertEqual(13, len(failover_flow.provides),
failover_flow.provides)
def test_get_failover_LB_flow_no_amps_single(self, mock_get_net_driver):
@ -360,11 +362,12 @@ class TestLoadBalancerFlows(base.TestCase):
self.assertIn(constants.LOADBALANCER, failover_flow.provides)
self.assertIn(constants.SERVER_PEM, failover_flow.provides)
self.assertIn(constants.VIP, failover_flow.provides)
self.assertIn(constants.ADDITIONAL_VIPS, failover_flow.provides)
self.assertIn(constants.VIP_SG_ID, failover_flow.provides)
self.assertEqual(6, len(failover_flow.requires),
failover_flow.requires)
self.assertEqual(12, len(failover_flow.provides),
self.assertEqual(13, len(failover_flow.provides),
failover_flow.provides)
def test_get_failover_LB_flow_no_amps_act_stdby(self, mock_get_net_driver):

View File

@ -544,6 +544,7 @@ class TestAmphoraDriverTasks(base.TestCase):
'host_routes': []
},
constants.VRRP_PORT: mock.MagicMock(),
'additional_vip_data': []
}
}
mock_amphora_repo_get.return_value = _db_amphora_mock
@ -559,7 +560,8 @@ class TestAmphoraDriverTasks(base.TestCase):
mock_driver.post_vip_plug.assert_called_once_with(
_db_amphora_mock, _db_load_balancer_mock, amphorae_net_config_mock,
vip_subnet=vip_subnet, vrrp_port=vrrp_port)
vip_subnet=vip_subnet, vrrp_port=vrrp_port,
additional_vip_data=[])
# Test revert
amp = amphora_post_vip_plug_obj.revert(None, _amphora_mock, _LB_mock)
@ -616,6 +618,7 @@ class TestAmphoraDriverTasks(base.TestCase):
'host_routes': host_routes
},
constants.VRRP_PORT: mock.MagicMock(),
'additional_vip_data': []
}
}
mock_amphora_repo_get.return_value = _db_amphora_mock
@ -631,7 +634,8 @@ class TestAmphoraDriverTasks(base.TestCase):
mock_driver.post_vip_plug.assert_called_once_with(
_db_amphora_mock, _db_load_balancer_mock, amphorae_net_config_mock,
vip_subnet=vip_subnet, vrrp_port=vrrp_port)
vip_subnet=vip_subnet, vrrp_port=vrrp_port,
additional_vip_data=[])
call_kwargs = mock_driver.post_vip_plug.call_args[1]
vip_subnet_arg = call_kwargs.get(constants.VIP_SUBNET)
@ -645,6 +649,77 @@ class TestAmphoraDriverTasks(base.TestCase):
amphorae_net_config_mock[AMP_ID][
constants.VIP_SUBNET]['host_routes'])
@mock.patch('octavia.db.repositories.LoadBalancerRepository.update')
@mock.patch('octavia.db.repositories.LoadBalancerRepository.get')
def test_amphora_post_vip_plug_with_additional_vips(
self, mock_lb_get, mock_loadbalancer_repo_update, mock_driver,
mock_generate_uuid, mock_log, mock_get_session,
mock_listener_repo_get, mock_listener_repo_update,
mock_amphora_repo_get, mock_amphora_repo_update):
host_routes = [{'destination': '10.0.0.0/16',
'nexthop': '192.168.10.3'},
{'destination': '10.2.0.0/16',
'nexthop': '192.168.10.5'}]
additional_host_routes = [{'destination': '2001:db9::/64',
'nexthop': '2001:db8::1:fff'}]
amphorae_net_config_mock = {
AMP_ID: {
constants.VIP_SUBNET: {
'host_routes': host_routes
},
constants.VRRP_PORT: mock.MagicMock(),
'additional_vip_data': [{
'ip_address': '2001:db8::3',
'subnet': {
'host_routes': additional_host_routes
}
}]
}
}
mock_amphora_repo_get.return_value = _db_amphora_mock
mock_lb_get.return_value = _db_load_balancer_mock
amphora_post_vip_plug_obj = amphora_driver_tasks.AmphoraPostVIPPlug()
amphora_post_vip_plug_obj.execute(_amphora_mock,
_LB_mock,
amphorae_net_config_mock)
vip_subnet = network_data_models.Subnet(
**amphorae_net_config_mock[AMP_ID]['vip_subnet'])
vrrp_port = network_data_models.Port(
**amphorae_net_config_mock[AMP_ID]['vrrp_port'])
additional_vip_data = [
network_data_models.AdditionalVipData(
ip_address=add_vip_data['ip_address'],
subnet=network_data_models.Subnet(
host_routes=add_vip_data['subnet']['host_routes']))
for add_vip_data in amphorae_net_config_mock[
AMP_ID]['additional_vip_data']]
mock_driver.post_vip_plug.assert_called_once_with(
_db_amphora_mock, _db_load_balancer_mock, amphorae_net_config_mock,
vip_subnet=vip_subnet, vrrp_port=vrrp_port,
additional_vip_data=additional_vip_data)
call_kwargs = mock_driver.post_vip_plug.call_args[1]
vip_subnet_arg = call_kwargs.get(constants.VIP_SUBNET)
self.assertEqual(2, len(vip_subnet_arg.host_routes))
for hr1, hr2 in zip(host_routes, vip_subnet_arg.host_routes):
self.assertEqual(hr1['destination'], hr2.destination)
self.assertEqual(hr1['nexthop'], hr2.nexthop)
self.assertEqual(
host_routes,
amphorae_net_config_mock[AMP_ID][
constants.VIP_SUBNET]['host_routes'])
add_vip_data_arg = call_kwargs.get('additional_vip_data')
self.assertEqual(1, len(add_vip_data_arg[0].subnet.host_routes))
hr1 = add_vip_data_arg[0].subnet.host_routes[0]
self.assertEqual(
additional_host_routes[0]['destination'], hr1.destination)
self.assertEqual(
additional_host_routes[0]['nexthop'], hr1.nexthop)
@mock.patch('octavia.db.repositories.LoadBalancerRepository.update')
@mock.patch('octavia.db.repositories.LoadBalancerRepository.get')
def test_amphorae_post_vip_plug(self, mock_lb_get,
@ -672,7 +747,8 @@ class TestAmphoraDriverTasks(base.TestCase):
mock_driver.post_vip_plug.assert_called_once_with(
_db_amphora_mock, _db_load_balancer_mock, amphorae_net_config_mock,
vip_subnet=vip_subnet, vrrp_port=vrrp_port)
vip_subnet=vip_subnet, vrrp_port=vrrp_port,
additional_vip_data=[])
# Test revert
amp = amphora_post_vip_plug_obj.revert(None, _LB_mock)

View File

@ -1372,24 +1372,26 @@ class TestNetworkTasks(base.TestCase):
mock_get_net_driver.return_value = mock_driver
net = network_tasks.AllocateVIP()
mock_driver.allocate_vip.return_value = LB.vip
mock_driver.allocate_vip.return_value = LB.vip, []
mock_driver.reset_mock()
self.assertEqual(LB.vip.to_dict(),
self.assertEqual((LB.vip.to_dict(), []),
net.execute(self.load_balancer_mock))
mock_driver.allocate_vip.assert_called_once_with(LB)
# revert
vip_mock = VIP.to_dict()
net.revert(vip_mock, self.load_balancer_mock)
additional_vips_mock = mock.MagicMock()
net.revert((vip_mock, additional_vips_mock), self.load_balancer_mock)
mock_driver.deallocate_vip.assert_called_once_with(
o_data_models.Vip(**vip_mock))
# revert exception
mock_driver.reset_mock()
additional_vips_mock.reset_mock()
mock_driver.deallocate_vip.side_effect = Exception('DeallVipException')
vip_mock = VIP.to_dict()
net.revert(vip_mock, self.load_balancer_mock)
net.revert((vip_mock, additional_vips_mock), self.load_balancer_mock)
mock_driver.deallocate_vip.assert_called_once_with(o_data_models.Vip(
**vip_mock))
@ -1402,10 +1404,10 @@ class TestNetworkTasks(base.TestCase):
mock_get_net_driver.return_value = mock_driver
net = network_tasks.AllocateVIPforFailover()
mock_driver.allocate_vip.return_value = LB.vip
mock_driver.allocate_vip.return_value = LB.vip, []
mock_driver.reset_mock()
self.assertEqual(LB.vip.to_dict(),
self.assertEqual((LB.vip.to_dict(), []),
net.execute(self.load_balancer_mock))
mock_driver.allocate_vip.assert_called_once_with(LB)

View File

@ -459,6 +459,35 @@ class TestAllowedAddressPairsDriver(base.TestCase):
lb, lb.vip, lb.amphorae[0], subnet)
self.driver.neutron_client.delete_port.assert_called_once()
def test_plug_aap_port_with_add_vips(self):
additional_vips = [
{'ip_address': t_constants.MOCK_IP_ADDRESS2,
'subnet_id': t_constants.MOCK_VIP_SUBNET_ID2}
]
lb = dmh.generate_load_balancer_tree(additional_vips=additional_vips)
subnet = network_models.Subnet(id=t_constants.MOCK_VIP_SUBNET_ID,
network_id=t_constants.MOCK_VIP_NET_ID)
list_ports = self.driver.neutron_client.list_ports
port1 = t_constants.MOCK_MANAGEMENT_PORT1['port']
port2 = t_constants.MOCK_MANAGEMENT_PORT2['port']
list_ports.side_effect = [{'ports': [port1]}, {'ports': [port2]}]
network_attach = self.driver.compute.attach_network_or_port
network_attach.side_effect = [t_constants.MOCK_VRRP_INTERFACE1]
update_port = self.driver.neutron_client.update_port
amp = self.driver.plug_aap_port(lb, lb.vip, lb.amphorae[0], subnet)
expected_aap = {
'port': {
'allowed_address_pairs':
[{'ip_address': lb.vip.ip_address},
{'ip_address': lb.additional_vips[0].ip_address}]}}
update_port.assert_any_call(amp.vrrp_port_id, expected_aap)
self.assertIn(amp.vrrp_ip, [t_constants.MOCK_VRRP_IP1,
t_constants.MOCK_VRRP_IP2])
self.assertEqual(lb.vip.ip_address, amp.ha_ip)
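
The expected_aap dict above is the crux of this test: the amphora's VRRP port must whitelist every VIP address, primary and additional. A runnable sketch of that construction:

# Builds the allowed-address-pairs port update checked above: the
# primary VIP first, then each additional VIP address.
def build_aap_update(vip_ip, additional_vip_ips):
    pairs = [{'ip_address': vip_ip}]
    pairs.extend({'ip_address': ip} for ip in additional_vip_ips)
    return {'port': {'allowed_address_pairs': pairs}}

assert build_aap_update('203.0.113.10', ['2001:db8::10']) == {
    'port': {'allowed_address_pairs': [
        {'ip_address': '203.0.113.10'},
        {'ip_address': '2001:db8::10'}]}}
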
def _set_safely(self, obj, name, value):
if isinstance(obj, dict):
current = obj.get(name)
@ -535,12 +564,13 @@ class TestAllowedAddressPairsDriver(base.TestCase):
network_id=t_constants.MOCK_NETWORK_ID,
ip_address=t_constants.MOCK_IP_ADDRESS)
fake_lb = data_models.LoadBalancer(id='1', vip=fake_lb_vip)
vip = self.driver.allocate_vip(fake_lb)
vip, additional_vips = self.driver.allocate_vip(fake_lb)
self.assertIsInstance(vip, data_models.Vip)
self.assertEqual(t_constants.MOCK_IP_ADDRESS, vip.ip_address)
self.assertEqual(t_constants.MOCK_SUBNET_ID, vip.subnet_id)
self.assertEqual(t_constants.MOCK_PORT_ID, vip.port_id)
self.assertEqual(fake_lb.id, vip.load_balancer_id)
self.assertFalse(additional_vips)
@mock.patch('octavia.network.drivers.neutron.base.BaseNeutronDriver.'
'_check_extension_enabled', return_value=True)
@ -568,7 +598,7 @@ class TestAllowedAddressPairsDriver(base.TestCase):
octavia_owned=True)
fake_lb = data_models.LoadBalancer(id='1', vip=fake_lb_vip,
project_id='test-project')
vip = self.driver.allocate_vip(fake_lb)
vip, additional_vips = self.driver.allocate_vip(fake_lb)
exp_create_port_call = {
'port': {
'name': 'octavia-lb-1',
@ -588,6 +618,7 @@ class TestAllowedAddressPairsDriver(base.TestCase):
self.assertEqual(t_constants.MOCK_SUBNET_ID, vip.subnet_id)
self.assertEqual(t_constants.MOCK_PORT_ID, vip.port_id)
self.assertEqual(fake_lb.id, vip.load_balancer_id)
self.assertFalse(additional_vips)
@mock.patch('octavia.network.drivers.neutron.base.BaseNeutronDriver.'
'get_port', side_effect=network_base.PortNotFound)
@ -611,7 +642,7 @@ class TestAllowedAddressPairsDriver(base.TestCase):
port_id=t_constants.MOCK_PORT_ID)
fake_lb = data_models.LoadBalancer(id='1', vip=fake_lb_vip,
project_id='test-project')
vip = self.driver.allocate_vip(fake_lb)
vip, additional_vips = self.driver.allocate_vip(fake_lb)
exp_create_port_call = {
'port': {
'name': 'octavia-lb-1',
@ -629,6 +660,7 @@ class TestAllowedAddressPairsDriver(base.TestCase):
self.assertEqual(t_constants.MOCK_SUBNET_ID, vip.subnet_id)
self.assertEqual(t_constants.MOCK_PORT_ID, vip.port_id)
self.assertEqual(fake_lb.id, vip.load_balancer_id)
self.assertFalse(additional_vips)
@mock.patch('octavia.network.drivers.neutron.base.BaseNeutronDriver.'
'get_port', side_effect=Exception('boom'))
@ -668,10 +700,11 @@ class TestAllowedAddressPairsDriver(base.TestCase):
'network_id': t_constants.MOCK_NETWORK_ID
}}
fake_lb_vip = data_models.Vip(subnet_id=t_constants.MOCK_SUBNET_ID,
network_id=t_constants.MOCK_NETWORK_ID)
network_id=t_constants.MOCK_NETWORK_ID,
ip_address=t_constants.MOCK_IP_ADDRESS)
fake_lb = data_models.LoadBalancer(id='1', vip=fake_lb_vip,
project_id='test-project')
vip = self.driver.allocate_vip(fake_lb)
vip, additional_vips = self.driver.allocate_vip(fake_lb)
exp_create_port_call = {
'port': {
'name': 'octavia-lb-1',
@ -680,7 +713,8 @@ class TestAllowedAddressPairsDriver(base.TestCase):
'device_owner': allowed_address_pairs.OCTAVIA_OWNER,
'admin_state_up': False,
'project_id': 'test-project',
'fixed_ips': [{'subnet_id': t_constants.MOCK_SUBNET_ID}]
'fixed_ips': [{'ip_address': t_constants.MOCK_IP_ADDRESS,
'subnet_id': t_constants.MOCK_SUBNET_ID}]
}
}
create_port.assert_called_once_with(exp_create_port_call)
@ -689,6 +723,7 @@ class TestAllowedAddressPairsDriver(base.TestCase):
self.assertEqual(t_constants.MOCK_SUBNET_ID, vip.subnet_id)
self.assertEqual(t_constants.MOCK_PORT_ID, vip.port_id)
self.assertEqual(fake_lb.id, vip.load_balancer_id)
self.assertFalse(additional_vips)
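
The updated fixed_ips expectation reflects that a requested VIP address is now passed through to port creation. A sketch of the construction implied by the two variants in this file:

# Include ip_address in the fixed IP only when the VIP requests a
# specific address; otherwise let Neutron pick one from the subnet.
def build_fixed_ips(subnet_id, ip_address=None):
    fixed_ip = {'subnet_id': subnet_id}
    if ip_address:
        fixed_ip['ip_address'] = ip_address
    return [fixed_ip]

assert build_fixed_ips('subnet-1') == [{'subnet_id': 'subnet-1'}]
assert build_fixed_ips('subnet-1', '10.0.0.5') == [
    {'subnet_id': 'subnet-1', 'ip_address': '10.0.0.5'}]
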
@mock.patch('octavia.network.drivers.neutron.base.BaseNeutronDriver.'
'_check_extension_enabled', return_value=True)
@ -709,7 +744,7 @@ class TestAllowedAddressPairsDriver(base.TestCase):
ip_address=t_constants.MOCK_IP_ADDRESS)
fake_lb = data_models.LoadBalancer(id='1', vip=fake_lb_vip,
project_id='test-project')
vip = self.driver.allocate_vip(fake_lb)
vip, additional_vips = self.driver.allocate_vip(fake_lb)
exp_create_port_call = {
'port': {
'name': 'octavia-lb-1',
@ -728,6 +763,7 @@ class TestAllowedAddressPairsDriver(base.TestCase):
self.assertEqual(t_constants.MOCK_SUBNET_ID, vip.subnet_id)
self.assertEqual(t_constants.MOCK_PORT_ID, vip.port_id)
self.assertEqual(fake_lb.id, vip.load_balancer_id)
self.assertFalse(additional_vips)
@mock.patch('octavia.network.drivers.neutron.base.BaseNeutronDriver.'
'_check_extension_enabled', return_value=True)
@ -746,7 +782,7 @@ class TestAllowedAddressPairsDriver(base.TestCase):
fake_lb_vip = data_models.Vip(network_id=t_constants.MOCK_NETWORK_ID)
fake_lb = data_models.LoadBalancer(id='1', vip=fake_lb_vip,
project_id='test-project')
vip = self.driver.allocate_vip(fake_lb)
vip, additional_vips = self.driver.allocate_vip(fake_lb)
exp_create_port_call = {
'port': {
'name': 'octavia-lb-1',
@ -760,6 +796,7 @@ class TestAllowedAddressPairsDriver(base.TestCase):
self.assertIsInstance(vip, data_models.Vip)
self.assertEqual(t_constants.MOCK_PORT_ID, vip.port_id)
self.assertEqual(fake_lb.id, vip.load_balancer_id)
self.assertTrue(additional_vips)
@mock.patch('octavia.network.drivers.neutron.base.BaseNeutronDriver.'
'_check_extension_enabled', return_value=False)
@ -776,10 +813,11 @@ class TestAllowedAddressPairsDriver(base.TestCase):
'network_id': t_constants.MOCK_NETWORK_ID
}}
fake_lb_vip = data_models.Vip(subnet_id=t_constants.MOCK_SUBNET_ID,
network_id=t_constants.MOCK_NETWORK_ID)
network_id=t_constants.MOCK_NETWORK_ID,
ip_address=t_constants.MOCK_IP_ADDRESS)
fake_lb = data_models.LoadBalancer(id='1', vip=fake_lb_vip,
project_id='test-project')
vip = self.driver.allocate_vip(fake_lb)
vip, additional_vips = self.driver.allocate_vip(fake_lb)
exp_create_port_call = {
'port': {
'name': 'octavia-lb-1',
@ -788,7 +826,8 @@ class TestAllowedAddressPairsDriver(base.TestCase):
'device_owner': allowed_address_pairs.OCTAVIA_OWNER,
'admin_state_up': False,
'tenant_id': 'test-project',
'fixed_ips': [{'subnet_id': t_constants.MOCK_SUBNET_ID}]
'fixed_ips': [{'ip_address': t_constants.MOCK_IP_ADDRESS,
'subnet_id': t_constants.MOCK_SUBNET_ID}]
}
}
create_port.assert_called_once_with(exp_create_port_call)
@ -797,6 +836,7 @@ class TestAllowedAddressPairsDriver(base.TestCase):
self.assertEqual(t_constants.MOCK_SUBNET_ID, vip.subnet_id)
self.assertEqual(t_constants.MOCK_PORT_ID, vip.port_id)
self.assertEqual(fake_lb.id, vip.load_balancer_id)
self.assertFalse(additional_vips)
def test_unplug_aap_port_errors_when_update_port_cant_find_port(self):
lb = dmh.generate_load_balancer_tree()

View File

@ -74,8 +74,8 @@ class TestBaseNeutronNetworkDriver(base.TestCase):
self.assertNotIn(mock.call('TEST2'), show_extension.mock_calls)
def test__add_allowed_address_pair_to_port(self):
self.driver._add_allowed_address_pair_to_port(
t_constants.MOCK_PORT_ID, t_constants.MOCK_IP_ADDRESS)
self.driver._add_allowed_address_pairs_to_port(
t_constants.MOCK_PORT_ID, [t_constants.MOCK_IP_ADDRESS])
expected_aap_dict = {
'port': {
'allowed_address_pairs': [
@ -156,9 +156,11 @@ class TestBaseNeutronNetworkDriver(base.TestCase):
def test__port_to_vip(self):
lb = dmh.generate_load_balancer_tree()
lb.vip.subnet_id = t_constants.MOCK_SUBNET_ID
lb.vip.ip_address = t_constants.MOCK_IP_ADDRESS
port = utils.convert_port_dict_to_model(t_constants.MOCK_NEUTRON_PORT)
vip = self.driver._port_to_vip(port, lb)
vip, additional_vips = self.driver._port_to_vip(port, lb)
self.assertIsInstance(vip, data_models.Vip)
self.assertIsInstance(additional_vips, list)
self.assertEqual(t_constants.MOCK_IP_ADDRESS, vip.ip_address)
self.assertEqual(t_constants.MOCK_SUBNET_ID, vip.subnet_id)
self.assertEqual(t_constants.MOCK_PORT_ID, vip.port_id)
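
_port_to_vip now has to decide which fixed IP on the port is the primary VIP, which is why the test sets lb.vip.ip_address before converting. A simplified stand-in for that split (plain dicts instead of the real data models):

# Splits a port's fixed IPs into the primary VIP (the entry matching
# the requested address) and additional VIPs (everything else).
def split_port_ips(fixed_ips, requested_ip):
    primary, extras = None, []
    for fip in fixed_ips:
        if fip['ip_address'] == requested_ip:
            primary = fip
        else:
            extras.append(fip)
    return primary, extras

primary, extras = split_port_ips(
    [{'ip_address': '10.0.0.2', 'subnet_id': 's1'},
     {'ip_address': '2001:db8::2', 'subnet_id': 's2'}], '10.0.0.2')
assert primary['subnet_id'] == 's1' and len(extras) == 1
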

View File

@ -167,56 +167,79 @@ class TestNoopNetworkDriver(base.TestCase):
)])
def test_get_network(self):
self.driver.get_network(self.network_id)
network = self.driver.get_network(self.network_id)
self.assertEqual(
(self.network_id, 'get_network'),
self.driver.driver.networkconfigconfig[self.network_id]
)
self.assertEqual(self.network_id, network.id)
network_again = self.driver.get_network(self.network_id)
self.assertEqual(network, network_again)
def test_get_subnet(self):
self.driver.get_subnet(self.subnet_id)
subnet = self.driver.get_subnet(self.subnet_id)
self.assertEqual(
(self.subnet_id, 'get_subnet'),
self.driver.driver.networkconfigconfig[self.subnet_id]
)
self.assertEqual(self.subnet_id, subnet.id)
subnet_again = self.driver.get_subnet(self.subnet_id)
self.assertEqual(subnet, subnet_again)
def test_get_port(self):
self.driver.get_port(self.port_id)
port = self.driver.get_port(self.port_id)
self.assertEqual(
(self.port_id, 'get_port'),
self.driver.driver.networkconfigconfig[self.port_id]
)
self.assertEqual(self.port_id, port.id)
port_again = self.driver.get_port(self.port_id)
self.assertEqual(port, port_again)
def test_get_network_by_name(self):
self.driver.get_network_by_name(self.network_name)
network = self.driver.get_network_by_name(self.network_name)
self.assertEqual(
(self.network_name, 'get_network_by_name'),
self.driver.driver.networkconfigconfig[self.network_name]
)
self.assertEqual(self.network_name, network.name)
network_again = self.driver.get_network_by_name(self.network_name)
self.assertEqual(network, network_again)
def test_get_subnet_by_name(self):
self.driver.get_subnet_by_name(self.subnet_name)
subnet = self.driver.get_subnet_by_name(self.subnet_name)
self.assertEqual(
(self.subnet_name, 'get_subnet_by_name'),
self.driver.driver.networkconfigconfig[self.subnet_name]
)
self.assertEqual(self.subnet_name, subnet.name)
subnet_again = self.driver.get_subnet_by_name(self.subnet_name)
self.assertEqual(subnet, subnet_again)
def test_get_port_by_name(self):
self.driver.get_port_by_name(self.port_name)
port = self.driver.get_port_by_name(self.port_name)
self.assertEqual(
(self.port_name, 'get_port_by_name'),
self.driver.driver.networkconfigconfig[self.port_name]
)
self.assertEqual(self.port_name, port.name)
port_again = self.driver.get_port_by_name(self.port_name)
self.assertEqual(port, port_again)
def test_get_port_by_net_id_device_id(self):
self.driver.get_port_by_net_id_device_id(self.network_id,
self.device_id)
port = self.driver.get_port_by_net_id_device_id(
self.network_id, self.device_id)
self.assertEqual(
(self.network_id, self.device_id,
'get_port_by_net_id_device_id'),
self.driver.driver.networkconfigconfig[(self.network_id,
self.device_id)]
)
self.assertEqual(self.network_id, port.network_id)
self.assertEqual(self.device_id, port.device_id)
port_again = self.driver.get_port_by_net_id_device_id(
self.network_id, self.device_id)
self.assertEqual(port, port_again)
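
These noop-driver tests all assert the same new property: the getters return a populated fake object and hand back the identical object on a repeat call. A toy memoizing getter in that spirit (the real noop driver builds richer fake network models):

# Toy cache-backed getter; the repeat-call identity is the point.
class NoopNetSketch:
    def __init__(self):
        self._cache = {}

    def get_network(self, network_id):
        if network_id not in self._cache:
            self._cache[network_id] = type(
                'FakeNetwork', (), {'id': network_id})()
        return self._cache[network_id]

driver = NoopNetSketch()
assert driver.get_network('net-1') is driver.get_network('net-1')
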
def test_get_security_group(self):
FAKE_SG_NAME = 'fake_sg_name'

View File

@ -0,0 +1,12 @@
---
features:
- |
It is now possible to create a loadbalancer with more than one VIP. The
create body accepts a new ``additional_vips`` list, where each entry
specifies a subnet and, optionally, an IP address. All VIP subnets must
belong to the same network.
upgrade:
- |
To support multi-VIP loadbalancers, a new amphora image must be built. The
new image is safe to upload before the upgrade, as it is fully backwards
compatible.