Add listener and pool protocol validation
The pool and listener can't be combined arbitrarily. We need to add some
constraints on the protocol side.

Story: 2003500
Tasks: 24777
Co-Authored-By: Carlos Goncalves <cgoncalves@redhat.com>
Change-Id: Ifed862639d3fc3de23ace4c7ceaea1a4eca62749
(cherry picked from commit 47e0ef31bc)
(cherry picked from commit 87704d42f4)
(cherry picked from commit 1f37a73eb9)
(cherry picked from commit 34545524b6)
This commit is contained in:
parent ea4ae88378
commit e3ddec776d
@@ -233,13 +233,15 @@ created_at:
   type: string
 default_pool_id:
   description: |
-    The ID of the pool used by the listener if no L7 policies match.
+    The ID of the pool used by the listener if no L7 policies match. The pool
+    has some restrictions. See :ref:`valid_protocol`.
   in: body
   required: true
   type: uuid
 default_pool_id-optional:
   description: |
-    The ID of the pool used by the listener if no L7 policies match.
+    The ID of the pool used by the listener if no L7 policies match. The pool
+    has some restrictions. See :ref:`valid_protocol`.
   in: body
   required: false
   type: uuid
@@ -492,14 +494,16 @@ l7policy-position-optional:
 l7policy-redirect-pool_id:
   description: |
     Requests matching this policy will be redirected to the pool with this ID.
-    Only valid if ``action`` is ``REDIRECT_TO_POOL``.
+    Only valid if ``action`` is ``REDIRECT_TO_POOL``. The pool has some
+    restrictions. See :ref:`valid_protocol`.
   in: body
   required: true
   type: uuid
 l7policy-redirect-pool_id-optional:
   description: |
     Requests matching this policy will be redirected to the pool with this ID.
-    Only valid if ``action`` is ``REDIRECT_TO_POOL``.
+    Only valid if ``action`` is ``REDIRECT_TO_POOL``. The pool has some
+    restrictions. See :ref:`valid_protocol`.
   in: body
   required: false
   type: uuid
@@ -639,7 +643,8 @@ listener-id:
 listener-id-pool-optional:
   description: |
     The ID of the listener for the pool. Either ``listener_id`` or
-    ``loadbalancer_id`` must be specified.
+    ``loadbalancer_id`` must be specified. The listener has some restrictions.
+    See :ref:`valid_protocol`.
   in: body
   required: false
   type: uuid
@@ -507,3 +507,55 @@ provisioning status once the asynchronous operation completes.
 
 An entity in ``ERROR`` has failed provisioning. The entity may be deleted and
 recreated.
+
+.. _valid_protocol:
+
+Protocol Combinations
+=====================
+
+The listener and pool can be associated through the listener's
+``default_pool_id`` or l7policy's ``redirect_pool_id``. Both the listener and
+the pool must set the protocol parameter, but the association between the
+listener and the pool is not arbitrary and has some constraints on the
+protocol side.
+
+Valid protocol combinations
+---------------------------
+
+.. |1| unicode:: U+2002 .. nut ( )
+.. |2| unicode:: U+2003 .. mutton ( )
+.. |listener| replace:: |2| |2| Listener
+.. |1Y| replace:: |1| Y
+.. |1N| replace:: |1| N
+.. |2Y| replace:: |2| Y
+.. |2N| replace:: |2| N
+.. |8Y| replace:: |2| |2| |2| |2| Y
+.. |8N| replace:: |2| |2| |2| |2| N
+
++-------------+-------+--------+------+-------------------+------+
+| |listener|  | HTTP  | HTTPS  | TCP  | TERMINATED_HTTPS  | UDP  |
+| Pool        |       |        |      |                   |      |
++=============+=======+========+======+===================+======+
+| HTTP        | |2Y|  | |2N|   | |1Y| | |8Y|              | |1N| |
++-------------+-------+--------+------+-------------------+------+
+| HTTPS       | |2N|  | |2Y|   | |1Y| | |8N|              | |1N| |
++-------------+-------+--------+------+-------------------+------+
+| PROXY       | |2Y|  | |2Y|   | |1Y| | |8Y|              | |1N| |
++-------------+-------+--------+------+-------------------+------+
+| TCP         | |2N|  | |2Y|   | |1Y| | |8N|              | |1N| |
++-------------+-------+--------+------+-------------------+------+
+| UDP         | |2N|  | |2N|   | |1N| | |8N|              | |1Y| |
++-------------+-------+--------+------+-------------------+------+
+
+"Y" means the combination is valid and "N" means invalid.
+
+The HTTPS protocol is HTTPS pass-through. For most providers, this is treated
+as a TCP protocol. Some advanced providers may support HTTPS session
+persistence features by using the session ID. The Amphora provider treats
+HTTPS as a TCP flow, but currently does not support HTTPS session persistence
+using the session ID.
+
+A pool protocol of PROXY will use the listener protocol as the pool protocol
+but will wrap that protocol in the proxy protocol. In the case of listener
+protocol TERMINATED_HTTPS, a pool protocol of PROXY will be HTTP wrapped in
+the proxy protocol.
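The compatibility table above can be checked programmatically. The sketch below is a standalone encoding of the documented combinations, using plain strings rather than Octavia's actual constants:

```python
# Standalone encoding of the documented listener/pool compatibility table.
# Keys are listener protocols; values are the pool protocols marked "Y".
VALID_LISTENER_POOL = {
    "HTTP": {"HTTP", "PROXY"},
    "HTTPS": {"HTTPS", "PROXY", "TCP"},
    "TCP": {"HTTP", "HTTPS", "PROXY", "TCP"},
    "TERMINATED_HTTPS": {"HTTP", "PROXY"},
    "UDP": {"UDP"},
}


def is_valid_combination(listener_protocol, pool_protocol):
    """Return True when the pool protocol may serve the given listener."""
    return pool_protocol in VALID_LISTENER_POOL.get(listener_protocol, set())
```

For example, an HTTPS (pass-through) listener may use a TCP pool, while an HTTP listener may not use an HTTPS pool, matching the "Y"/"N" cells of the table.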
@@ -23,6 +23,7 @@ from octavia.common import data_models
 from octavia.common import exceptions
 from octavia.common import policy
 from octavia.db import repositories
+from octavia.i18n import _
 
 CONF = cfg.CONF
 LOG = logging.getLogger(__name__)
@@ -196,3 +197,15 @@ class BaseController(rest.RestController):
         attrs = [attr for attr in dir(obj) if not callable(
             getattr(obj, attr)) and not attr.startswith("_")]
         return attrs
+
+    @staticmethod
+    def _validate_protocol(listener_protocol, pool_protocol):
+        proto_map = constants.VALID_LISTENER_POOL_PROTOCOL_MAP
+        for valid_pool_proto in proto_map[listener_protocol]:
+            if pool_protocol == valid_pool_proto:
+                return
+        detail = _("The pool protocol '%(pool_protocol)s' is invalid while "
+                   "the listener protocol is '%(listener_protocol)s'.") % {
+            "pool_protocol": pool_protocol,
+            "listener_protocol": listener_protocol}
+        raise exceptions.ValidationException(detail=detail)
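Outside of Octavia, the helper's logic can be exercised with a minimal stand-in; here `ValueError` replaces `exceptions.ValidationException` and the map is inlined as plain strings:

```python
# Inlined copy of the valid listener->pool protocol map from the change.
VALID_LISTENER_POOL_PROTOCOL_MAP = {
    "TCP": ["HTTP", "HTTPS", "PROXY", "TCP"],
    "HTTP": ["HTTP", "PROXY"],
    "HTTPS": ["HTTPS", "PROXY", "TCP"],
    "TERMINATED_HTTPS": ["HTTP", "PROXY"],
}


def validate_protocol(listener_protocol, pool_protocol):
    # Mirrors BaseController._validate_protocol: return silently on a
    # valid pair, raise with the same message shape otherwise.
    if pool_protocol in VALID_LISTENER_POOL_PROTOCOL_MAP[listener_protocol]:
        return
    raise ValueError(
        "The pool protocol '%(pool_protocol)s' is invalid while the "
        "listener protocol is '%(listener_protocol)s'." % {
            "pool_protocol": pool_protocol,
            "listener_protocol": listener_protocol})
```

The early return on a match keeps the happy path cheap; only an incompatible pair pays the cost of formatting the error detail.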
@@ -144,16 +144,17 @@ class L7PolicyController(base.BaseController):
         """Creates a l7policy on a listener."""
         l7policy = l7policy_.l7policy
         context = pecan.request.context.get('octavia_context')
-        # Make sure any pool specified by redirect_pool_id exists
-        if l7policy.redirect_pool_id:
-            self._get_db_pool(
-                context.session, l7policy.redirect_pool_id)
         # Verify the parent listener exists
         listener_id = l7policy.listener_id
         listener = self._get_db_listener(
             context.session, listener_id)
         load_balancer_id = listener.load_balancer_id
         l7policy.project_id = listener.project_id
+        # Make sure any pool specified by redirect_pool_id exists
+        if l7policy.redirect_pool_id:
+            db_pool = self._get_db_pool(
+                context.session, l7policy.redirect_pool_id)
+            self._validate_protocol(listener.protocol, db_pool.protocol)
 
         self._auth_validate_action(context, l7policy.project_id,
                                    constants.RBAC_POST)
@@ -216,11 +217,15 @@ class L7PolicyController(base.BaseController):
             l7policy_dict[attr] = l7policy_dict.pop(val)
         sanitized_l7policy = l7policy_types.L7PolicyPUT(**l7policy_dict)
         context = pecan.request.context.get('octavia_context')
+
+        db_l7policy = self._get_db_l7policy(context.session, id)
+        listener = self._get_db_listener(
+            context.session, db_l7policy.listener_id)
         # Make sure any specified redirect_pool_id exists
         if l7policy_dict.get('redirect_pool_id'):
-            self._get_db_pool(
+            db_pool = self._get_db_pool(
                 context.session, l7policy_dict['redirect_pool_id'])
-        db_l7policy = self._get_db_l7policy(context.session, id)
+            self._validate_protocol(listener.protocol, db_pool.protocol)
         load_balancer_id, listener_id = self._get_listener_and_loadbalancer_id(
             db_l7policy)
@@ -121,13 +121,14 @@ class ListenersController(base.BaseController):
             raise exceptions.ImmutableObject(resource=db_lb._name(),
                                              id=lb_id)
 
-    def _validate_pool(self, session, lb_id, pool_id):
+    def _validate_pool(self, session, lb_id, pool_id, listener_protocol):
         """Validate pool given exists on same load balancer as listener."""
         db_pool = self.repositories.pool.get(
             session, load_balancer_id=lb_id, id=pool_id)
         if not db_pool:
             raise exceptions.NotFound(
                 resource=data_models.Pool._name(), id=pool_id)
+        self._validate_protocol(listener_protocol, db_pool.protocol)
 
     def _reset_lb_status(self, session, lb_id):
         # Setting LB back to active because this should be a recoverable error
@@ -246,7 +247,8 @@ class ListenersController(base.BaseController):
 
         if listener_dict['default_pool_id']:
             self._validate_pool(context.session, load_balancer_id,
-                                listener_dict['default_pool_id'])
+                                listener_dict['default_pool_id'],
+                                listener.protocol)
 
         self._test_lb_and_listener_statuses(
             lock_session, lb_id=load_balancer_id)
@@ -268,7 +270,8 @@ class ListenersController(base.BaseController):
         l7policies = listener_dict.pop('l7policies', l7policies)
         if listener_dict.get('default_pool_id'):
             self._validate_pool(lock_session, load_balancer_id,
-                                listener_dict['default_pool_id'])
+                                listener_dict['default_pool_id'],
+                                listener_dict['protocol'])
         db_listener = self._validate_create_listener(
             lock_session, listener_dict)
@@ -326,7 +329,7 @@ class ListenersController(base.BaseController):
 
         if listener.default_pool_id:
             self._validate_pool(context.session, load_balancer_id,
-                                listener.default_pool_id)
+                                listener.default_pool_id, db_listener.protocol)
         self._test_lb_and_listener_statuses(context.session, load_balancer_id,
                                             id=id)
@@ -165,6 +165,7 @@ class PoolsController(base.BaseController):
         elif pool.listener_id:
             listener = self.repositories.listener.get(
                 context.session, id=pool.listener_id)
+            self._validate_protocol(listener.protocol, pool.protocol)
             pool.project_id = listener.project_id
             pool.loadbalancer_id = listener.load_balancer_id
         else:
@@ -64,6 +64,13 @@ PROTOCOL_PROXY = 'PROXY'
 SUPPORTED_PROTOCOLS = (PROTOCOL_TCP, PROTOCOL_HTTPS, PROTOCOL_HTTP,
                        PROTOCOL_TERMINATED_HTTPS, PROTOCOL_PROXY)
 
+VALID_LISTENER_POOL_PROTOCOL_MAP = {
+    PROTOCOL_TCP: [PROTOCOL_HTTP, PROTOCOL_HTTPS,
+                   PROTOCOL_PROXY, PROTOCOL_TCP],
+    PROTOCOL_HTTP: [PROTOCOL_HTTP, PROTOCOL_PROXY],
+    PROTOCOL_HTTPS: [PROTOCOL_HTTPS, PROTOCOL_PROXY, PROTOCOL_TCP],
+    PROTOCOL_TERMINATED_HTTPS: [PROTOCOL_HTTP, PROTOCOL_PROXY]}
+
 # API Integer Ranges
 MIN_PORT_NUMBER = 1
 MAX_PORT_NUMBER = 65535
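Read in the other direction, the same map answers which listener protocols accept a given pool protocol. A small sketch, with plain strings standing in for the constants:

```python
# Inlined copy of the valid listener->pool protocol map from the change.
VALID_LISTENER_POOL_PROTOCOL_MAP = {
    "TCP": ["HTTP", "HTTPS", "PROXY", "TCP"],
    "HTTP": ["HTTP", "PROXY"],
    "HTTPS": ["HTTPS", "PROXY", "TCP"],
    "TERMINATED_HTTPS": ["HTTP", "PROXY"],
}


def listeners_accepting(pool_protocol):
    """Listener protocols that allow a pool of the given protocol."""
    return sorted(
        listener
        for listener, pools in VALID_LISTENER_POOL_PROTOCOL_MAP.items()
        if pool_protocol in pools)
```

Notably, a PROXY pool is accepted by every listener protocol in the map, while TERMINATED_HTTPS never appears as a pool protocol (pools behind a TERMINATED_HTTPS listener speak plain HTTP).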
@@ -12,6 +12,8 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
+from octavia.common import constants
+
 
 class MockNovaInterface(object):
     net_id = None
@@ -213,3 +215,13 @@ MOCK_NETWORK_IP_AVAILABILITY = {'network_ip_availability': (
     'total_ips': MOCK_NETWORK_TOTAL_IPS,
     'used_ips': MOCK_NETWORK_USED_IPS,
     'subnet_ip_availability': MOCK_SUBNET_IP_AVAILABILITY})}
+
+INVALID_LISTENER_POOL_PROTOCOL_MAP = {
+    constants.PROTOCOL_HTTP: [constants.PROTOCOL_HTTPS,
+                              constants.PROTOCOL_TCP,
+                              constants.PROTOCOL_TERMINATED_HTTPS],
+    constants.PROTOCOL_HTTPS: [constants.PROTOCOL_HTTP,
+                               constants.PROTOCOL_TERMINATED_HTTPS],
+    constants.PROTOCOL_TCP: [constants.PROTOCOL_TERMINATED_HTTPS],
+    constants.PROTOCOL_TERMINATED_HTTPS: [constants.PROTOCOL_HTTPS,
+                                          constants.PROTOCOL_TC P]}
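The valid map in `octavia.common.constants` and this invalid test map are meant to be mutually exclusive per listener protocol. A quick sketch of that consistency check, with both maps inlined as plain strings:

```python
# Inlined copies of the two maps from the change, as plain strings.
VALID = {
    "TCP": {"HTTP", "HTTPS", "PROXY", "TCP"},
    "HTTP": {"HTTP", "PROXY"},
    "HTTPS": {"HTTPS", "PROXY", "TCP"},
    "TERMINATED_HTTPS": {"HTTP", "PROXY"},
}
INVALID = {
    "HTTP": {"HTTPS", "TCP", "TERMINATED_HTTPS"},
    "HTTPS": {"HTTP", "TERMINATED_HTTPS"},
    "TCP": {"TERMINATED_HTTPS"},
    "TERMINATED_HTTPS": {"HTTPS", "TCP"},
}


def maps_are_disjoint(valid, invalid):
    """No pool protocol may appear in both maps for the same listener."""
    return all(not (valid[listener] & invalid[listener])
               for listener in valid)
```

If a protocol were ever added to both maps, the positive and negative functional tests below would contradict each other, so a check of this shape is a cheap guard.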
@@ -304,6 +304,9 @@ class BaseAPITest(base_db_test.OctaviaDBTestBase):
         response = self.put(path, body, status=202)
         return response.json
 
+    # NOTE: This method should be used cautiously. On load balancers with a
+    # significant amount of children resources, it will update the status for
+    # each and every resource and thus taking a lot of DB time.
     def _set_lb_and_children_statuses(self, lb_id, prov_status, op_status,
                                       autodetect=True):
         self.set_object_status(self.lb_repo, lb_id,
@@ -373,6 +376,9 @@ class BaseAPITest(base_db_test.OctaviaDBTestBase):
                                 provisioning_status=hm_prov,
                                 operating_status=op_status)
 
+    # NOTE: This method should be used cautiously. On load balancers with a
+    # significant amount of children resources, it will update the status for
+    # each and every resource and thus taking a lot of DB time.
     def set_lb_status(self, lb_id, status=None):
         explicit_status = True if status is not None else False
         if not explicit_status:
@@ -21,6 +21,7 @@ from oslo_utils import uuidutils
 from octavia.common import constants
 import octavia.common.context
 from octavia.common import data_models
+from octavia.tests.common import constants as c_const
 from octavia.tests.functional.api.v2 import base
 
 
@@ -1029,3 +1030,118 @@ class TestL7Policy(base.BaseAPITest):
         self.delete(self.L7POLICY_PATH.format(
             l7policy_id=l7policy.get('id')),
             status=204)
+
+    @mock.patch('octavia.common.tls_utils.cert_parser.load_certificate_data')
+    def test_listener_pool_protocol_map_post(self, mock_cert_data):
+        cert = data_models.TLSContainer(certificate='cert')
+        mock_cert_data.return_value = {'sni_certs': [cert]}
+        valid_map = constants.VALID_LISTENER_POOL_PROTOCOL_MAP
+        port = 1
+        l7policy = {'action': constants.L7POLICY_ACTION_REDIRECT_TO_POOL}
+        for listener_proto in valid_map:
+            for pool_proto in valid_map[listener_proto]:
+                port = port + 1
+                opts = {}
+                if listener_proto == constants.PROTOCOL_TERMINATED_HTTPS:
+                    opts['sni_container_refs'] = [uuidutils.generate_uuid()]
+                listener = self.create_listener(
+                    listener_proto, port, self.lb_id, **opts).get('listener')
+                self.set_object_status(self.lb_repo, self.lb_id)
+                pool = self.create_pool(
+                    self.lb_id, pool_proto,
+                    constants.LB_ALGORITHM_ROUND_ROBIN).get('pool')
+
+                l7policy['listener_id'] = listener.get('id')
+                l7policy['redirect_pool_id'] = pool.get('id')
+                self.set_object_status(self.lb_repo, self.lb_id)
+                self.post(self.L7POLICIES_PATH,
+                          self._build_body(l7policy), status=201)
+                self.set_object_status(self.lb_repo, self.lb_id)
+
+        invalid_map = c_const.INVALID_LISTENER_POOL_PROTOCOL_MAP
+        port = 100
+        for listener_proto in invalid_map:
+            opts = {}
+            if listener_proto == constants.PROTOCOL_TERMINATED_HTTPS:
+                opts['sni_container_refs'] = [uuidutils.generate_uuid()]
+            listener = self.create_listener(
+                listener_proto, port, self.lb_id, **opts).get('listener')
+            self.set_object_status(self.lb_repo, self.lb_id)
+            port = port + 1
+            for pool_proto in invalid_map[listener_proto]:
+                pool = self.create_pool(
+                    self.lb_id, pool_proto,
+                    constants.LB_ALGORITHM_ROUND_ROBIN).get('pool')
+                self.set_object_status(self.lb_repo, self.lb_id)
+
+                l7policy['listener_id'] = listener.get('id')
+                l7policy['redirect_pool_id'] = pool.get('id')
+                expect_error_msg = ("Validation failure: The pool protocol "
+                                    "'%s' is invalid while the listener "
+                                    "protocol is '%s'.") % (pool_proto,
+                                                            listener_proto)
+                res = self.post(self.L7POLICIES_PATH,
+                                self._build_body(l7policy), status=400)
+                self.assertEqual(expect_error_msg, res.json['faultstring'])
+                self.assert_correct_status(lb_id=self.lb_id)
+
+    @mock.patch('octavia.common.tls_utils.cert_parser.load_certificate_data')
+    def test_listener_pool_protocol_map_put(self, mock_cert_data):
+        cert = data_models.TLSContainer(certificate='cert')
+        mock_cert_data.return_value = {'sni_certs': [cert]}
+        valid_map = constants.VALID_LISTENER_POOL_PROTOCOL_MAP
+        port = 1
+        new_l7policy = {'action': constants.L7POLICY_ACTION_REDIRECT_TO_POOL}
+        for listener_proto in valid_map:
+            for pool_proto in valid_map[listener_proto]:
+                port = port + 1
+                opts = {}
+                if listener_proto == constants.PROTOCOL_TERMINATED_HTTPS:
+                    opts['sni_container_refs'] = [uuidutils.generate_uuid()]
+                listener = self.create_listener(
+                    listener_proto, port, self.lb_id, **opts).get('listener')
+                self.set_object_status(self.lb_repo, self.lb_id)
+                pool = self.create_pool(
+                    self.lb_id, pool_proto,
+                    constants.LB_ALGORITHM_ROUND_ROBIN).get('pool')
+                self.set_object_status(self.lb_repo, self.lb_id)
+                l7policy = self.create_l7policy(
+                    listener.get('id'),
+                    constants.L7POLICY_ACTION_REJECT).get(self.root_tag)
+                self.set_object_status(self.lb_repo, self.lb_id)
+                new_l7policy['redirect_pool_id'] = pool.get('id')
+
+                self.put(
+                    self.L7POLICY_PATH.format(l7policy_id=l7policy.get('id')),
+                    self._build_body(new_l7policy), status=200)
+                self.set_object_status(self.lb_repo, self.lb_id)
+
+        invalid_map = c_const.INVALID_LISTENER_POOL_PROTOCOL_MAP
+        port = 100
+        for listener_proto in invalid_map:
+            opts = {}
+            if listener_proto == constants.PROTOCOL_TERMINATED_HTTPS:
+                opts['sni_container_refs'] = [uuidutils.generate_uuid()]
+            listener = self.create_listener(
+                listener_proto, port, self.lb_id, **opts).get('listener')
+            self.set_object_status(self.lb_repo, self.lb_id)
+            port = port + 1
+            for pool_proto in invalid_map[listener_proto]:
+                pool = self.create_pool(
+                    self.lb_id, pool_proto,
+                    constants.LB_ALGORITHM_ROUND_ROBIN).get('pool')
+                self.set_object_status(self.lb_repo, self.lb_id)
+                l7policy = self.create_l7policy(
+                    listener.get('id'),
+                    constants.L7POLICY_ACTION_REJECT).get(self.root_tag)
+                self.set_object_status(self.lb_repo, self.lb_id)
+                new_l7policy['redirect_pool_id'] = pool.get('id')
+                expect_error_msg = ("Validation failure: The pool protocol "
+                                    "'%s' is invalid while the listener "
+                                    "protocol is '%s'.") % (pool_proto,
+                                                            listener_proto)
+                res = self.put(self.L7POLICY_PATH.format(
+                    l7policy_id=l7policy.get('id')),
+                    self._build_body(new_l7policy), status=400)
+                self.assertEqual(expect_error_msg, res.json['faultstring'])
+                self.assert_correct_status(lb_id=self.lb_id)
@@ -23,6 +23,7 @@ from oslo_utils import uuidutils
 from octavia.common import constants
 import octavia.common.context
 from octavia.common import data_models
+from octavia.tests.common import constants as c_const
 from octavia.tests.functional.api.v2 import base
 
 
@@ -1350,3 +1351,101 @@ class TestListener(base.BaseAPITest):
             listener_id=li['id'] + "/stats"), status=403)
         self.conf.config(group='api_settings', auth_strategy=auth_strategy)
         self.assertEqual(self.NOT_AUTHORIZED_BODY, res.json)
+
+    @mock.patch('octavia.common.tls_utils.cert_parser.load_certificate_data')
+    def test_listener_pool_protocol_map_post(self, mock_cert_data):
+        cert = data_models.TLSContainer(certificate='cert')
+        mock_cert_data.return_value = {'sni_certs': [cert]}
+        valid_map = constants.VALID_LISTENER_POOL_PROTOCOL_MAP
+        port = 1
+        for listener_proto in valid_map:
+            for pool_proto in valid_map[listener_proto]:
+                port = port + 1
+                pool = self.create_pool(
+                    self.lb_id, pool_proto,
+                    constants.LB_ALGORITHM_ROUND_ROBIN).get('pool')
+                self.set_object_status(self.lb_repo, self.lb_id)
+                listener = {'protocol': listener_proto,
+                            'protocol_port': port,
+                            'loadbalancer_id': self.lb_id,
+                            'default_pool_id': pool.get('id')}
+                if listener_proto == constants.PROTOCOL_TERMINATED_HTTPS:
+                    listener.update(
+                        {'sni_container_refs': [uuidutils.generate_uuid()]})
+                body = self._build_body(listener)
+                self.post(self.LISTENERS_PATH, body, status=201)
+                self.set_object_status(self.lb_repo, self.lb_id)
+
+        invalid_map = c_const.INVALID_LISTENER_POOL_PROTOCOL_MAP
+        port = 1
+        for listener_proto in invalid_map:
+            for pool_proto in invalid_map[listener_proto]:
+                port = port + 1
+                pool = self.create_pool(
+                    self.lb_id, pool_proto,
+                    constants.LB_ALGORITHM_ROUND_ROBIN).get('pool')
+                self.set_object_status(self.lb_repo, self.lb_id)
+                expect_error_msg = ("Validation failure: The pool protocol "
+                                    "'%s' is invalid while the listener "
+                                    "protocol is '%s'.") % (pool_proto,
+                                                            listener_proto)
+                listener = {'protocol': listener_proto,
+                            'protocol_port': port,
+                            'loadbalancer_id': self.lb_id,
+                            'default_pool_id': pool.get('id')}
+                body = self._build_body(listener)
+                res = self.post(self.LISTENERS_PATH, body,
+                                status=400, expect_errors=True)
+                self.assertEqual(expect_error_msg, res.json['faultstring'])
+                self.assert_correct_status(lb_id=self.lb_id)
+
+    @mock.patch('octavia.common.tls_utils.cert_parser.load_certificate_data')
+    def test_listener_pool_protocol_map_put(self, mock_cert_data):
+        cert = data_models.TLSContainer(certificate='cert')
+        mock_cert_data.return_value = {'sni_certs': [cert]}
+        valid_map = constants.VALID_LISTENER_POOL_PROTOCOL_MAP
+        port = 1
+        for listener_proto in valid_map:
+            for pool_proto in valid_map[listener_proto]:
+                port = port + 1
+                pool = self.create_pool(
+                    self.lb_id, pool_proto,
+                    constants.LB_ALGORITHM_ROUND_ROBIN).get('pool')
+                self.set_object_status(self.lb_repo, self.lb_id)
+                opts = {}
+                if listener_proto == constants.PROTOCOL_TERMINATED_HTTPS:
+                    opts['sni_container_refs'] = [uuidutils.generate_uuid()]
+                listener = self.create_listener(
+                    listener_proto, port, self.lb_id, **opts).get('listener')
+                self.set_object_status(self.lb_repo, self.lb_id)
+                new_listener = {'default_pool_id': pool.get('id')}
+                res = self.put(
+                    self.LISTENER_PATH.format(listener_id=listener.get('id')),
+                    self._build_body(new_listener), status=200)
+                self.set_object_status(self.lb_repo, self.lb_id)
+
+        invalid_map = c_const.INVALID_LISTENER_POOL_PROTOCOL_MAP
+        port = 100
+        for listener_proto in invalid_map:
+            opts = {}
+            if listener_proto == constants.PROTOCOL_TERMINATED_HTTPS:
+                opts['sni_container_refs'] = [uuidutils.generate_uuid()]
+            listener = self.create_listener(
+                listener_proto, port, self.lb_id, **opts).get('listener')
+            self.set_object_status(self.lb_repo, self.lb_id)
+            port = port + 1
+            for pool_proto in invalid_map[listener_proto]:
+                expect_error_msg = ("Validation failure: The pool protocol "
+                                    "'%s' is invalid while the listener "
+                                    "protocol is '%s'.") % (pool_proto,
+                                                            listener_proto)
+                pool = self.create_pool(
+                    self.lb_id, pool_proto,
+                    constants.LB_ALGORITHM_ROUND_ROBIN).get('pool')
+                self.set_object_status(self.lb_repo, self.lb_id)
+                new_listener = {'default_pool_id': pool.get('id')}
+                res = self.put(
+                    self.LISTENER_PATH.format(listener_id=listener.get('id')),
+                    self._build_body(new_listener), status=400)
+                self.assertEqual(expect_error_msg, res.json['faultstring'])
+                self.assert_correct_status(lb_id=self.lb_id)
@@ -2473,7 +2473,7 @@ class TestLoadBalancerGraph(base.BaseAPITest):
             expected_members=[expected_member],
             create_hm=create_hm,
             expected_hm=expected_hm,
-            protocol=constants.PROTOCOL_TCP)
+            protocol=constants.PROTOCOL_HTTP)
         create_sni_containers, expected_sni_containers = (
             self._get_sni_container_bodies())
         create_l7rules, expected_l7rules = self._get_l7rules_bodies()
@@ -21,6 +21,7 @@ from oslo_utils import uuidutils
 from octavia.common import constants
 import octavia.common.context
 from octavia.common import data_models
+from octavia.tests.common import constants as c_const
 from octavia.tests.functional.api.v2 import base
 
 
@@ -1411,3 +1412,58 @@ class TestPool(base.BaseAPITest):
         self.set_lb_status(self.lb_id, status=constants.DELETED)
         self.delete(self.POOL_PATH.format(pool_id=api_pool.get('id')),
                     status=204)
+
+    @mock.patch('octavia.common.tls_utils.cert_parser.load_certificate_data')
+    def test_valid_listener_pool_protocol(self, mock_cert_data):
+        cert = data_models.TLSContainer(certificate='cert')
+        lb_pool = {
+            'lb_algorithm': constants.LB_ALGORITHM_ROUND_ROBIN,
+            'project_id': self.project_id}
+        mock_cert_data.return_value = {'sni_certs': [cert]}
+        valid_map = constants.VALID_LISTENER_POOL_PROTOCOL_MAP
+        port = 1
+        for listener_proto in valid_map:
+            for pool_proto in valid_map[listener_proto]:
+                port = port + 1
+                opts = {}
+                if listener_proto == constants.PROTOCOL_TERMINATED_HTTPS:
+                    opts['sni_container_refs'] = [uuidutils.generate_uuid()]
+                listener = self.create_listener(
+                    listener_proto, port, self.lb_id, **opts).get('listener')
+                self.set_object_status(self.lb_repo, self.lb_id)
+                if listener['default_pool_id'] is None:
+                    lb_pool['protocol'] = pool_proto
+                    lb_pool['listener_id'] = listener.get('id')
+                    self.post(self.POOLS_PATH, self._build_body(lb_pool),
+                              status=201)
+                self.set_object_status(self.lb_repo, self.lb_id)
+
+    @mock.patch('octavia.common.tls_utils.cert_parser.load_certificate_data')
+    def test_invalid_listener_pool_protocol_map(self, mock_cert_data):
+        cert = data_models.TLSContainer(certificate='cert')
+        lb_pool = {
+            'lb_algorithm': constants.LB_ALGORITHM_ROUND_ROBIN,
+            'project_id': self.project_id}
+        mock_cert_data.return_value = {'sni_certs': [cert]}
+        invalid_map = c_const.INVALID_LISTENER_POOL_PROTOCOL_MAP
+        port = 1
+        for listener_proto in invalid_map:
+            opts = {}
+            if listener_proto == constants.PROTOCOL_TERMINATED_HTTPS:
+                opts['sni_container_refs'] = [uuidutils.generate_uuid()]
+            listener = self.create_listener(
+                listener_proto, port, self.lb_id, **opts).get('listener')
+            self.set_object_status(self.lb_repo, self.lb_id)
+            port = port + 1
+            for pool_proto in invalid_map[listener_proto]:
+                expect_error_msg = ("Validation failure: The pool protocol "
+                                    "'%s' is invalid while the listener "
+                                    "protocol is '%s'.") % (pool_proto,
+                                                            listener_proto)
+                if listener['default_pool_id'] is None:
+                    lb_pool['protocol'] = pool_proto
+                    lb_pool['listener_id'] = listener.get('id')
+                    res = self.post(self.POOLS_PATH,
+                                    self._build_body(lb_pool),
+                                    status=400, expect_errors=True)
+                    self.assertEqual(expect_error_msg,
+                                     res.json['faultstring'])
+                    self.assert_correct_status(lb_id=self.lb_id)
@@ -0,0 +1,5 @@
+---
+fixes:
+  - |
+    Add listener and pool protocol validation. The pool and listener can't be
+    combined arbitrarily. We need some constraints on the protocol side.