Remove v1 API and associated code

Includes some updates to docs and configs and related files to remove
references to neutron-lbaas. Also remove handlers.

Change-Id: I3082962841d3b645f3cbd1a6b41fc7fb28dcf7e6
Adam Harwell 2019-05-01 14:12:22 -07:00
parent 1a87298ac7
commit 29d4340e9f
110 changed files with 85 additions and 16736 deletions


@@ -1,13 +1,5 @@
{
"versions": [{
"status": "DEPRECATED",
"updated": "2014-12-11T00:00:00Z",
"id": "v1",
"links": [{
"href": "http://10.21.21.53/load-balancer/v1",
"rel": "self"
}]
}, {
"status": "SUPPORTED",
"updated": "2016-12-11T00:00:00Z",
"id": "v2.0",


@@ -15,15 +15,10 @@ Supported API version
None
Deprecated API version
:doc:`v1/octaviaapi`
.. toctree::
:hidden:
v2/index
v1/octaviaapi
Octavia API minor releases are additive to the API major revision and share
the same URL path. Minor revision changes to the API are called out in the API

File diff suppressed because it is too large


@@ -270,15 +270,6 @@ function octavia_configure {
iniset $OCTAVIA_CONF oslo_messaging rpc_thread_pool_size 2
iniset $OCTAVIA_CONF oslo_messaging topic octavia_prov
# TODO(nmagnezi): Remove this when neutron-lbaas gets deprecated
# Setting neutron request_poll_timeout
iniset $NEUTRON_CONF octavia request_poll_timeout 3000
if [[ "$WSGI_MODE" == "uwsgi" ]]; then
iniadd $NEUTRON_CONF octavia base_url "$OCTAVIA_PROTOCOL://$SERVICE_HOST/$OCTAVIA_SERVICE_TYPE"
else
iniadd $NEUTRON_CONF octavia base_url "$OCTAVIA_PROTOCOL://$SERVICE_HOST:$OCTAVIA_PORT/"
fi
# Uncomment other default options
iniuncomment $OCTAVIA_CONF haproxy_amphora base_path
iniuncomment $OCTAVIA_CONF haproxy_amphora base_cert_dir
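The `iniset` and `iniuncomment` helpers above come from devstack's shared function library. As a rough, hypothetical sketch of what a call like `iniset $NEUTRON_CONF octavia request_poll_timeout 3000` does (the real devstack helper additionally handles existing keys, quoting, and other edge cases; `ini_set` below is a simplified stand-in, not devstack's implementation):

```shell
#!/bin/sh
# ini_set: simplified stand-in for devstack's iniset (illustration only).
# Usage: ini_set <file> <section> <option> <value>
# Naive: assumes the option is not already present in the section.
ini_set() {
    file=$1 section=$2 option=$3 value=$4
    # Create the [section] header if the file doesn't have it yet.
    grep -q "^\[$section\]" "$file" 2>/dev/null || printf '[%s]\n' "$section" >> "$file"
    # Insert "option = value" immediately after the section header.
    awk -v sec="[$section]" -v line="$option = $value" '
        { print }
        $0 == sec { print line }
    ' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}
```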


@@ -90,11 +90,7 @@ GITREPO["octavia-lib"]=${OCTAVIA_LIB_REPO:-${GIT_BASE}/openstack/octavia-lib.git
GITBRANCH["octavia-lib"]=${OCTAVIA_LIB_BRANCH:-master}
GITDIR["octavia-lib"]=$DEST/octavia-lib
NEUTRON_LBAAS_DIR=$DEST/neutron-lbaas
NEUTRON_LBAAS_CONF=$NEUTRON_CONF_DIR/neutron_lbaas.conf
OCTAVIA_SERVICE_PROVIDER=${OCTAVIA_SERVICE_PROVIDER:-"LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default"}
NEUTRON_ANY=${NEUTRON_ANY:-"q-svc neutron-api"}
LBAAS_V2=${LBAAS_V2:-"neutron-lbaasv2"}
# HA-deployment related settings
OCTAVIA_USE_PREGENERATED_SSH_KEY=${OCTAVIA_USE_PREGENERATED_SSH_KEY:-"False"}


@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
#
# Tempest documentation build configuration file, created by
# Octavia documentation build configuration file, created by
# sphinx-quickstart on Tue May 21 17:43:32 2013.
#
# This file is execfile()d with the current directory set to its containing


@@ -72,8 +72,7 @@ Deployment
2. Copy ``devstack/contrib/new-octavia-devstack.sh`` from this source
repository onto that host.
3. Run new-octavia-devstack.sh as root.
4. Deploy loadbalancers, listeners, etc. as you would with any Neutron LBaaS v2
enabled cloud.
4. Deploy loadbalancers, listeners, etc.
Running Octavia in production
@@ -125,7 +124,7 @@ For the purposes of this guide, we will therefore assume the following core
components have already been set up for your production OpenStack environment:
* Nova
* Neutron (with Neutron LBaaS v2)
* Neutron
* Glance
* Barbican (if TLS offloading functionality is enabled)
* Keystone
@@ -138,11 +137,8 @@ Production Deployment Walkthrough
Create Octavia User
___________________
By default Octavia will use the 'neutron' user for keystone authentication, and
the admin user for interactions with all other services. However, it doesn't
actually share neutron's database or otherwise access Neutron outside of
Neutron's API, so a dedicated 'octavia' keystone user should generally be
created for Octavia to use.
By default Octavia will use the 'octavia' user for keystone authentication, and
the admin user for interactions with all other services.
You must:
@@ -225,14 +221,8 @@ Running multiple instances of the individual Octavia controller components on
separate physical hosts is recommended in order to provide scalability and
availability of the controller software.
One important security note: In 0.9 of Octavia, the Octavia API is designed to
be consumed only by the Neutron-LBaaS v2 Octavia driver. As such, there is
presently no authentication required to use the Octavia API, and therefore the
Octavia API should only be accessible on trusted network segments
(specifically, the segment that runs the neutron-services daemons.)
The Octavia controller presently consists of several components which may be
split across several physical machines. For the 0.9 release of Octavia, the
split across several physical machines. For the 4.0 release of Octavia, the
important (and potentially separable) components are the controller worker,
housekeeper, health manager and API controller. Please see the component
diagrams elsewhere in this repository's documentation for detailed descriptions
@@ -253,7 +243,7 @@ components need access to outside resources:
| housekeeper | Yes | Yes | No |
+-------------------+------------+----------+----------------+
In addition to talking to each other via OSLO messaging, various controller
In addition to talking to each other via Oslo messaging, various controller
components must also communicate with other OpenStack components, like nova,
neutron, barbican, etc. via their APIs.
@@ -438,46 +428,22 @@ You must:
* Make sure each Octavia controller component is started appropriately.
Configuring Neutron LBaaS
_________________________
This is fairly straightforward. Neutron LBaaS needs to be directed to use the
Octavia service provider. There should be a line like the following in
``/etc/neutron/neutron_lbaas.conf`` file's ``[service providers]`` section:
::
service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
In addition to the above you must add the octavia API ``base_url`` to the
``[octavia]`` section of ``/etc/neutron/neutron.conf``. For example:
::
[octavia]
base_url=http://127.0.0.1:9876
You must:
* Update ``/etc/neutron/neutron_lbaas.conf`` as described above.
* Add the octavia API URL to ``/etc/neutron/neutron.conf``.
Install Neutron-LBaaS v2 extension in Horizon
Install Octavia extension in Horizon
_____________________________________________
This isn't strictly necessary for all cloud installations, however, if yours
makes use of the Horizon GUI interface for tenants, it is probably also a good
idea to make sure that it is configured with the Neutron-LBaaS v2 extension.
idea to make sure that it is configured with the Octavia extension.
You may:
* Install the neutron-lbaasv2 GUI extension in Horizon
* Install the octavia GUI extension in Horizon
Test deployment
_______________
If all of the above instructions have been followed, it should now be possible
to deploy load balancing services using the python neutronclient CLI,
communicating with the neutron-lbaas v2 API.
to deploy load balancing services using the OpenStack CLI,
communicating with the Octavia v2 API.
Example:


@@ -1,672 +0,0 @@
..
Copyright (c) 2016 IBM
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
===============================================================
Basic Load Balancing Cookbook Using Neutron Client (deprecated)
===============================================================
.. warning:: The neutron client used in this document is deprecated. We
strongly encourage you to use the OpenStack Client and Octavia
OpenStack Client plugin instead. This document is being maintained
for deployments still using neutron-lbaas and the neutron client.
Introduction
============
This document contains several examples of using basic load balancing services
as a tenant or "regular" cloud user.
For the purposes of this guide we assume that the neutron and barbican
command-line interfaces are going to be used to configure all features of
Neutron LBaaS with an Octavia back-end. In order to keep these examples short,
we also assume that tasks not directly associated with deploying load balancing
services have already been accomplished. This might include such things as
deploying and configuring web servers, setting up Neutron networks, obtaining
TLS certificates from a trusted provider, and so on. A description of the
starting conditions is given in each example below.
Please also note that this guide assumes you are familiar with the specific
load balancer terminology defined in the :doc:`../../reference/glossary`. For a
description of load balancing itself and the Octavia project, please see:
:doc:`../../reference/introduction`.
Examples
========
Deploy a basic HTTP load balancer
---------------------------------
While this is technically the simplest complete load balancing solution that
can be deployed, we recommend deploying HTTP load balancers with a health
monitor to ensure back-end member availability. See
:ref:`basic-lb-with-hm-neutron` below.
**Scenario description**:
* Back-end servers 192.0.2.10 and 192.0.2.11 on subnet *private-subnet* have
been configured with an HTTP application on TCP port 80.
* Subnet *public-subnet* is a shared external subnet created by the cloud
operator which is reachable from the internet.
* We want to configure a basic load balancer that is accessible from the
internet, which distributes web requests to the back-end servers.
**Solution**:
1. Create load balancer *lb1* on subnet *public-subnet*.
2. Create listener *listener1*.
3. Create pool *pool1* as *listener1*'s default pool.
4. Add members 192.0.2.10 and 192.0.2.11 on *private-subnet* to *pool1*.
**CLI commands**:
::
neutron lbaas-loadbalancer-create --name lb1 public-subnet
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
neutron lbaas-loadbalancer-show lb1
neutron lbaas-listener-create --name listener1 --loadbalancer lb1 --protocol HTTP --protocol-port 80
neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.10 --protocol-port 80 pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.11 --protocol-port 80 pool1
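The "re-run until lb1 shows ACTIVE" comment above is a manual polling loop. A sketch of automating it follows; `get_status` is a hypothetical placeholder the caller defines, which in real use would wrap `neutron lbaas-loadbalancer-show lb1` and extract the provisioning status:

```shell
#!/bin/sh
# wait_until_active: poll a status source until it reports ACTIVE, or give up.
# get_status is a caller-supplied placeholder (hypothetical name); in real use
# it would query the load balancer's provisioning_status.
wait_until_active() {
    max_tries=$1 delay=$2
    i=0
    while [ "$i" -lt "$max_tries" ]; do
        # Success as soon as the status source reports ACTIVE.
        [ "$(get_status)" = "ACTIVE" ] && return 0
        i=$((i + 1))
        sleep "$delay"
    done
    return 1   # timed out without reaching ACTIVE
}
```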
.. _basic-lb-with-hm-neutron:
Deploy a basic HTTP load balancer with a health monitor
-------------------------------------------------------
This is the simplest recommended load balancing solution for HTTP applications.
This solution is appropriate for operators with provider networks that are not
compatible with Neutron floating-ip functionality (such as IPv6 networks).
However, if you need to retain control of the external IP through which a load
balancer is accessible, even if the load balancer needs to be destroyed or
recreated, it may be more appropriate to deploy your basic load balancer using
a floating IP. See :ref:`basic-lb-with-hm-and-fip-neutron` below.
**Scenario description**:
* Back-end servers 192.0.2.10 and 192.0.2.11 on subnet *private-subnet* have
been configured with an HTTP application on TCP port 80.
* These back-end servers have been configured with a health check at the URL
path "/healthcheck". See :ref:`http-heath-monitors-neutron` below.
* Subnet *public-subnet* is a shared external subnet created by the cloud
operator which is reachable from the internet.
* We want to configure a basic load balancer that is accessible from the
internet, which distributes web requests to the back-end servers, and which
checks the "/healthcheck" path to ensure back-end member health.
**Solution**:
1. Create load balancer *lb1* on subnet *public-subnet*.
2. Create listener *listener1*.
3. Create pool *pool1* as *listener1*'s default pool.
4. Create a health monitor on *pool1* which tests the "/healthcheck" path.
5. Add members 192.0.2.10 and 192.0.2.11 on *private-subnet* to *pool1*.
**CLI commands**:
::
neutron lbaas-loadbalancer-create --name lb1 public-subnet
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
neutron lbaas-loadbalancer-show lb1
neutron lbaas-listener-create --name listener1 --loadbalancer lb1 --protocol HTTP --protocol-port 80
neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
neutron lbaas-healthmonitor-create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url_path /healthcheck --pool pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.10 --protocol-port 80 pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.11 --protocol-port 80 pool1
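The health monitor flags above roughly mean: probe every 5 seconds (`--delay 5`), count a probe as failed if no answer arrives within 10 seconds (`--timeout 10`), and mark the member down after 4 consecutive failures (`--max-retries 4`). A sketch of the consecutive-failure logic, fed precomputed probe results rather than live HTTP checks (purely illustrative, not Octavia's implementation):

```shell
#!/bin/sh
# member_state: decide UP/DOWN from a sequence of probe results.
# probes is a space-separated list of 0 (probe succeeded) / 1 (probe failed).
member_state() {
    max_retries=$1 probes=$2
    failures=0
    for result in $probes; do
        if [ "$result" -eq 0 ]; then
            failures=0                      # any success resets the counter
        else
            failures=$((failures + 1))
        fi
        if [ "$failures" -ge "$max_retries" ]; then
            echo DOWN; return               # too many consecutive failures
        fi
    done
    echo UP
}
```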
.. _basic-lb-with-hm-and-fip-neutron:
Deploy a basic HTTP load balancer using a floating IP
-----------------------------------------------------
It can be beneficial to use a floating IP when setting up a load balancer's VIP
in order to ensure you retain control of the IP that gets assigned as the
floating IP in case the load balancer needs to be destroyed, moved, or
recreated.
Note that this is not possible to do with IPv6 load balancers as floating IPs
do not work with IPv6. Further, there is currently a bug in Neutron Distributed
Virtual Routing (DVR) which prevents floating IPs from working correctly when
DVR is in use. See: https://bugs.launchpad.net/neutron/+bug/1583694
**Scenario description**:
* Back-end servers 192.0.2.10 and 192.0.2.11 on subnet *private-subnet* have
been configured with an HTTP application on TCP port 80.
* These back-end servers have been configured with a health check at the URL
path "/healthcheck". See :ref:`http-heath-monitors-neutron` below.
* Neutron network *public* is a shared external network created by the cloud
operator which is reachable from the internet.
* We want to configure a basic load balancer that is accessible from the
internet, which distributes web requests to the back-end servers, and which
checks the "/healthcheck" path to ensure back-end member health. Further, we
want to do this using a floating IP.
**Solution**:
1. Create load balancer *lb1* on subnet *private-subnet*.
2. Create listener *listener1*.
3. Create pool *pool1* as *listener1*'s default pool.
4. Create a health monitor on *pool1* which tests the "/healthcheck" path.
5. Add members 192.0.2.10 and 192.0.2.11 on *private-subnet* to *pool1*.
6. Create a floating IP address on *public-subnet*.
7. Associate this floating IP with *lb1*'s VIP port.
**CLI commands**:
::
neutron lbaas-loadbalancer-create --name lb1 private-subnet
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
neutron lbaas-loadbalancer-show lb1
neutron lbaas-listener-create --name listener1 --loadbalancer lb1 --protocol HTTP --protocol-port 80
neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
neutron lbaas-healthmonitor-create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url_path /healthcheck --pool pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.10 --protocol-port 80 pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.11 --protocol-port 80 pool1
neutron floatingip-create public
# The following IDs should be visible in the output of previous commands
neutron floatingip-associate <floating_ip_id> <load_balancer_vip_port_id>
Deploy a basic HTTP load balancer with session persistence
----------------------------------------------------------
**Scenario description**:
* Back-end servers 192.0.2.10 and 192.0.2.11 on subnet *private-subnet* have
been configured with an HTTP application on TCP port 80.
* The application is written such that web clients should always be directed to
the same back-end server throughout their web session, based on an
application cookie inserted by the web application named 'PHPSESSIONID'.
* These back-end servers have been configured with a health check at the URL
path "/healthcheck". See :ref:`http-heath-monitors-neutron` below.
* Subnet *public-subnet* is a shared external subnet created by the cloud
operator which is reachable from the internet.
* We want to configure a basic load balancer that is accessible from the
internet, which distributes web requests to the back-end servers, persists
sessions using the PHPSESSIONID as a key, and which checks the "/healthcheck"
path to ensure back-end member health.
**Solution**:
1. Create load balancer *lb1* on subnet *public-subnet*.
2. Create listener *listener1*.
3. Create pool *pool1* as *listener1*'s default pool which defines session
persistence on the 'PHPSESSIONID' cookie.
4. Create a health monitor on *pool1* which tests the "/healthcheck" path.
5. Add members 192.0.2.10 and 192.0.2.11 on *private-subnet* to *pool1*.
**CLI commands**:
::
neutron lbaas-loadbalancer-create --name lb1 public-subnet
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
neutron lbaas-loadbalancer-show lb1
neutron lbaas-listener-create --name listener1 --loadbalancer lb1 --protocol HTTP --protocol-port 80
neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --session-persistence type=APP_COOKIE,cookie_name=PHPSESSIONID
neutron lbaas-healthmonitor-create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url_path /healthcheck --pool pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.10 --protocol-port 80 pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.11 --protocol-port 80 pool1
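The effect of `--session-persistence type=APP_COOKIE,cookie_name=PHPSESSIONID` is that every request carrying the same PHPSESSIONID value is sent to the same back-end member. A hash-based sketch of that stickiness property (purely illustrative; it is not how the load balancer actually tracks cookies):

```shell
#!/bin/sh
# pick_member: deterministic cookie -> member mapping (illustration only).
pick_member() {
    cookie=$1
    set -- 192.0.2.10 192.0.2.11            # the two pool members
    # cksum gives a stable checksum of the cookie value; use it as an index.
    sum=$(printf '%s' "$cookie" | cksum | cut -d' ' -f1)
    shift $((sum % $#))
    echo "$1"
}
```

The same cookie always yields the same member, which is the persistence guarantee the listener configuration provides.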
Deploy a TCP load balancer
--------------------------
This is generally suitable when load balancing a non-HTTP TCP-based service.
**Scenario description**:
* Back-end servers 192.0.2.10 and 192.0.2.11 on subnet *private-subnet* have
been configured with a custom application on TCP port 23456.
* Subnet *public-subnet* is a shared external subnet created by the cloud
operator which is reachable from the internet.
* We want to configure a basic load balancer that is accessible from the
internet, which distributes requests to the back-end servers.
* We want to employ a TCP health check to ensure that the back-end servers are
available.
**Solution**:
1. Create load balancer *lb1* on subnet *public-subnet*.
2. Create listener *listener1*.
3. Create pool *pool1* as *listener1*'s default pool.
4. Create a health monitor on *pool1* which probes *pool1*'s members' TCP
service port.
5. Add members 192.0.2.10 and 192.0.2.11 on *private-subnet* to *pool1*.
**CLI commands**:
::
neutron lbaas-loadbalancer-create --name lb1 public-subnet
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
neutron lbaas-loadbalancer-show lb1
neutron lbaas-listener-create --name listener1 --loadbalancer lb1 --protocol TCP --protocol-port 23456
neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol TCP
neutron lbaas-healthmonitor-create --delay 5 --max-retries 4 --timeout 10 --type TCP --pool pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.10 --protocol-port 80 pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.11 --protocol-port 80 pool1
Deploy a non-terminated HTTPS load balancer
-------------------------------------------
A non-terminated HTTPS load balancer acts effectively like a generic TCP load
balancer: The load balancer will forward the raw TCP traffic from the web
client to the back-end servers without decrypting it. This means that the
back-end servers themselves must be configured to terminate the HTTPS
connection with the web clients, and in turn, the load balancer cannot insert
headers into the HTTP session indicating the client IP address. (That is, to
the back-end server, all web requests will appear to originate from the load
balancer.) Also, advanced load balancer features (like Layer 7 functionality)
cannot be used with non-terminated HTTPS.
**Scenario description**:
* Back-end servers 192.0.2.10 and 192.0.2.11 on subnet *private-subnet* have
been configured with a TLS-encrypted web application on TCP port 443.
* Subnet *public-subnet* is a shared external subnet created by the cloud
operator which is reachable from the internet.
* We want to configure a basic load balancer that is accessible from the
internet, which distributes requests to the back-end servers.
* We want to employ a TCP health check to ensure that the back-end servers are
available.
**Solution**:
1. Create load balancer *lb1* on subnet *public-subnet*.
2. Create listener *listener1*.
3. Create pool *pool1* as *listener1*'s default pool.
4. Create a health monitor on *pool1* which probes *pool1*'s members' TCP
service port.
5. Add members 192.0.2.10 and 192.0.2.11 on *private-subnet* to *pool1*.
**CLI commands**:
::
neutron lbaas-loadbalancer-create --name lb1 public-subnet
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
neutron lbaas-loadbalancer-show lb1
neutron lbaas-listener-create --name listener1 --loadbalancer lb1 --protocol HTTPS --protocol-port 443
neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTPS
neutron lbaas-healthmonitor-create --delay 5 --max-retries 4 --timeout 10 --type TCP --pool pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.10 --protocol-port 443 pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.11 --protocol-port 443 pool1
.. _basic-tls-terminated-listener-neutron:
Deploy a TLS-terminated HTTPS load balancer
-------------------------------------------
With a TLS-terminated HTTPS load balancer, web clients communicate with the
load balancer over TLS protocols. The load balancer terminates the TLS session
and forwards the decrypted requests to the back-end servers. By terminating the
TLS session on the load balancer, we offload the CPU-intensive encryption work
to the load balancer, and enable the possibility of using advanced load
balancer features, like Layer 7 features and header manipulation.
**Scenario description**:
* Back-end servers 192.0.2.10 and 192.0.2.11 on subnet *private-subnet* have
been configured with a regular HTTP application on TCP port 80.
* These back-end servers have been configured with a health check at the URL
path "/healthcheck". See :ref:`http-heath-monitors-neutron` below.
* Subnet *public-subnet* is a shared external subnet created by the cloud
operator which is reachable from the internet.
* A TLS certificate, key, and intermediate certificate chain for
www.example.com have been obtained from an external certificate authority.
These now exist in the files server.crt, server.key, and ca-chain.p7b in the
current directory. The key and certificate are PEM-encoded, and the
intermediate certificate chain is PKCS7 PEM encoded. The key is not encrypted
with a passphrase.
* The *admin* user on this cloud installation has keystone ID *admin_id*
* We want to configure a TLS-terminated HTTPS load balancer that is accessible
from the internet using the key and certificate mentioned above, which
distributes requests to the back-end servers over the non-encrypted HTTP
protocol.
**Solution**:
1. Create barbican *secret* resources for the certificate, key, and
intermediate certificate chain. We will call these *cert1*, *key1*, and
*intermediates1* respectively.
2. Create a *secret container* resource combining all of the above. We will
call this *tls_container1*.
3. Grant the *admin* user access to all the *secret* and *secret container*
barbican resources above.
4. Create load balancer *lb1* on subnet *public-subnet*.
5. Create listener *listener1* as a TERMINATED_HTTPS listener referencing
*tls_container1* as its default TLS container.
6. Create pool *pool1* as *listener1*'s default pool.
7. Add members 192.0.2.10 and 192.0.2.11 on *private-subnet* to *pool1*.
**CLI commands**:
::
openstack secret store --name='cert1' --payload-content-type='text/plain' --payload="$(cat server.crt)"
openstack secret store --name='key1' --payload-content-type='text/plain' --payload="$(cat server.key)"
openstack secret store --name='intermediates1' --payload-content-type='text/plain' --payload="$(cat ca-chain.p7b)"
openstack secret container create --name='tls_container1' --type='certificate' --secret="certificate=$(openstack secret list | awk '/ cert1 / {print $2}')" --secret="private_key=$(openstack secret list | awk '/ key1 / {print $2}')" --secret="intermediates=$(openstack secret list | awk '/ intermediates1 / {print $2}')"
neutron lbaas-loadbalancer-create --name lb1 public-subnet
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
neutron lbaas-loadbalancer-show lb1
neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(openstack secret container list | awk '/ tls_container1 / {print $2}')
neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.10 --protocol-port 80 pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.11 --protocol-port 80 pool1
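The `openstack secret list | awk '/ cert1 / {print $2}'` idiom used above scrapes the secret href out of the client's table output: in the `|`-delimited table, whitespace-split field 2 is the href column, and the space-padded pattern avoids matching names that merely contain "cert1". A self-contained illustration on fabricated sample output (the hrefs below are invented):

```shell
#!/bin/sh
# Fabricated `openstack secret list` table rows (hrefs are made up):
sample='| http://barbican.example/v1/secrets/1111-aaaa | cert1 |
| http://barbican.example/v1/secrets/2222-bbbb | key1 |'
# $1 is the leading "|", so $2 is the href; / cert1 / selects the right row.
cert_ref=$(printf '%s\n' "$sample" | awk '/ cert1 / {print $2}')
echo "$cert_ref"
```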
Deploy a TLS-terminated HTTPS load balancer with SNI
----------------------------------------------------
This example is exactly like :ref:`basic-tls-terminated-listener-neutron`,
except that we have multiple TLS certificates that we would like to use on
the same listener using Server Name Indication (SNI) technology.
**Scenario description**:
* Back-end servers 192.0.2.10 and 192.0.2.11 on subnet *private-subnet* have
been configured with a regular HTTP application on TCP port 80.
* These back-end servers have been configured with a health check at the URL
path "/healthcheck". See :ref:`http-heath-monitors-neutron` below.
* Subnet *public-subnet* is a shared external subnet created by the cloud
operator which is reachable from the internet.
* TLS certificates, keys, and intermediate certificate chains for
www.example.com and www2.example.com have been obtained from an external
certificate authority. These now exist in the files server.crt, server.key,
ca-chain.p7b, server2.crt, server2-encrypted.key, and ca-chain2.p7b in the
current directory. The keys and certificates are PEM-encoded, and the
intermediate certificate chains are PKCS7 PEM encoded.
* The key for www.example.com is not encrypted with a passphrase.
* The key for www2.example.com is encrypted with the passphrase "abc123".
* The *admin* user on this cloud installation has keystone ID *admin_id*
* We want to configure a TLS-terminated HTTPS load balancer that is accessible
from the internet using the keys and certificates mentioned above, which
distributes requests to the back-end servers over the non-encrypted HTTP
protocol.
* If a web client connects that is not SNI capable, we want the load balancer
to respond with the certificate for www.example.com.
**Solution**:
1. Create barbican *secret* resources for the certificates, keys, and
intermediate certificate chains. We will call these *cert1*, *key1*,
*intermediates1*, *cert2*, *key2* and *intermediates2* respectively.
2. Create a barbican *secret* resource *passphrase2* for the passphrase for
*key2*
3. Create *secret container* resources combining the above appropriately. We
will call these *tls_container1* and *tls_container2*.
4. Grant the *admin* user access to all the *secret* and *secret container*
barbican resources above.
5. Create load balancer *lb1* on subnet *public-subnet*.
6. Create listener *listener1* as a TERMINATED_HTTPS listener referencing
*tls_container1* as its default TLS container, and referencing both
*tls_container1* and *tls_container2* using SNI.
7. Create pool *pool1* as *listener1*'s default pool.
8. Add members 192.0.2.10 and 192.0.2.11 on *private-subnet* to *pool1*.
**CLI commands**:
::
openstack secret store --name='cert1' --payload-content-type='text/plain' --payload="$(cat server.crt)"
openstack secret store --name='key1' --payload-content-type='text/plain' --payload="$(cat server.key)"
openstack secret store --name='intermediates1' --payload-content-type='text/plain' --payload="$(cat ca-chain.p7b)"
openstack secret container create --name='tls_container1' --type='certificate' --secret="certificate=$(openstack secret list | awk '/ cert1 / {print $2}')" --secret="private_key=$(openstack secret list | awk '/ key1 / {print $2}')" --secret="intermediates=$(openstack secret list | awk '/ intermediates1 / {print $2}')"
openstack secret store --name='cert2' --payload-content-type='text/plain' --payload="$(cat server2.crt)"
openstack secret store --name='key2' --payload-content-type='text/plain' --payload="$(cat server2-encrypted.key)"
openstack secret store --name='intermediates2' --payload-content-type='text/plain' --payload="$(cat ca-chain2.p7b)"
openstack secret store --name='passphrase2' --payload-content-type='text/plain' --payload="abc123"
openstack secret container create --name='tls_container2' --type='certificate' --secret="certificate=$(openstack secret list | awk '/ cert2 / {print $2}')" --secret="private_key=$(openstack secret list | awk '/ key2 / {print $2}')" --secret="intermediates=$(openstack secret list | awk '/ intermediates2 / {print $2}')" --secret="private_key_passphrase=$(openstack secret list | awk '/ passphrase2 / {print $2}')"
neutron lbaas-loadbalancer-create --name lb1 public-subnet
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
neutron lbaas-loadbalancer-show lb1
neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(openstack secret container list | awk '/ tls_container1 / {print $2}') --sni-container_refs $(openstack secret container list | awk '/ tls_container1 / {print $2}') $(openstack secret container list | awk '/ tls_container2 / {print $2}')
neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.10 --protocol-port 80 pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.11 --protocol-port 80 pool1
Deploy HTTP and TLS-terminated HTTPS load balancing on the same IP and backend
------------------------------------------------------------------------------
This example is exactly like :ref:`basic-tls-terminated-listener-neutron`,
except that we would like to have both an HTTP and TERMINATED_HTTPS listener
that use the same back-end pool (and therefore, probably respond with the
exact same content regardless of whether the web client uses the HTTP or HTTPS
protocol to connect).
Please note that if you wish all HTTP requests to be redirected to HTTPS (so
that requests are only served via HTTPS, and attempts to access content over
HTTP just get redirected to the HTTPS listener), then please see `the example
<l7-cookbook-neutron.html#redirect-http-to-https-n>`__ in the
:doc:`l7-cookbook-neutron`.
**Scenario description**:
* Back-end servers 192.0.2.10 and 192.0.2.11 on subnet *private-subnet* have
been configured with a regular HTTP application on TCP port 80.
* These back-end servers have been configured with a health check at the URL
path "/healthcheck". See :ref:`http-heath-monitors-neutron` below.
* Subnet *public-subnet* is a shared external subnet created by the cloud
operator which is reachable from the internet.
* A TLS certificate, key, and intermediate certificate chain for
www.example.com have been obtained from an external certificate authority.
These now exist in the files server.crt, server.key, and ca-chain.p7b in the
current directory. The key and certificate are PEM-encoded, and the
intermediate certificate chain is PKCS7 PEM encoded. The key is not encrypted
with a passphrase.
* The *admin* user on this cloud installation has keystone ID *admin_id*
* We want to configure a TLS-terminated HTTPS load balancer that is accessible
from the internet using the key and certificate mentioned above, which
distributes requests to the back-end servers over the non-encrypted HTTP
protocol.
* We also want to configure an HTTP load balancer on the same IP address as
the above which serves the exact same content (i.e., forwards to the same
back-end pool) as the TERMINATED_HTTPS listener.
**Solution**:
1. Create barbican *secret* resources for the certificate, key, and
intermediate certificate chain. We will call these *cert1*, *key1*, and
*intermediates1* respectively.
2. Create a *secret container* resource combining all of the above. We will
call this *tls_container1*.
3. Grant the *admin* user access to all the *secret* and *secret container*
barbican resources above.
4. Create load balancer *lb1* on subnet *public-subnet*.
5. Create listener *listener1* as a TERMINATED_HTTPS listener referencing
*tls_container1* as its default TLS container.
6. Create pool *pool1* as *listener1*'s default pool.
7. Add members 192.0.2.10 and 192.0.2.11 on *private-subnet* to *pool1*.
8. Create listener *listener2* as an HTTP listener with *pool1* as its
default pool.
**CLI commands**:
::
openstack secret store --name='cert1' --payload-content-type='text/plain' --payload="$(cat server.crt)"
openstack secret store --name='key1' --payload-content-type='text/plain' --payload="$(cat server.key)"
openstack secret store --name='intermediates1' --payload-content-type='text/plain' --payload="$(cat ca-chain.p7b)"
openstack secret container create --name='tls_container1' --type='certificate' --secret="certificate=$(openstack secret list | awk '/ cert1 / {print $2}')" --secret="private_key=$(openstack secret list | awk '/ key1 / {print $2}')" --secret="intermediates=$(openstack secret list | awk '/ intermediates1 / {print $2}')"
neutron lbaas-loadbalancer-create --name lb1 public-subnet
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
neutron lbaas-loadbalancer-show lb1
neutron lbaas-listener-create --loadbalancer lb1 --protocol-port 443 --protocol TERMINATED_HTTPS --name listener1 --default-tls-container=$(openstack secret container list | awk '/ tls_container1 / {print $2}')
neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.10 --protocol-port 80 pool1
neutron lbaas-member-create --subnet private-subnet --address 192.0.2.11 --protocol-port 80 pool1
neutron lbaas-listener-create --name listener2 --loadbalancer lb1 --protocol HTTP --protocol-port 80 --default-pool pool1
.. _heath-monitor-best-practices-neutron:
Health Monitor Best Practices
=============================
While it is possible to set up a listener without a health monitor, if a
back-end pool member goes down, Octavia will not remove the failed server from
the pool until a considerable time has passed. This can lead to service
disruption for web clients. Because of this, we recommend always configuring
production load balancers to use a health monitor.
The health monitor itself is a process that does periodic health checks on each
back-end server to pre-emptively detect failed servers and temporarily pull
them out of the pool. Since effective health monitors depend as much on
back-end application server configuration as on proper load balancer
configuration, some additional discussion of best practices is warranted here.
See also: `Octavia API Reference <https://developer.openstack.org/api-ref/load-balancer/>`_
Health monitor options
----------------------
All of the health monitors Octavia supports have the following configurable
options:
* ``delay``: Number of seconds to wait between health checks.
* ``timeout``: Number of seconds to wait for any given health check to
complete. ``timeout`` should always be smaller than ``delay``.
* ``max-retries``: Number of consecutive health checks a given back-end
server must fail before it is considered *down*, or that a failed back-end
server must pass to be considered *up* again.
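As a rough rule of thumb (an approximation, not Octavia's exact bookkeeping), these three options bound how long a failed member can keep receiving traffic:

```python
def validate_monitor(delay, timeout, max_retries):
    """Sanity-check the relationship between the options described above."""
    if timeout >= delay:
        raise ValueError("timeout should always be smaller than delay")
    if max_retries < 1:
        raise ValueError("max_retries must be at least 1")

def worst_case_detection_seconds(delay, timeout, max_retries):
    """Rough upper bound on how long a failed member may go undetected:
    up to max_retries probes, started `delay` seconds apart, each allowed
    up to `timeout` seconds to fail."""
    return max_retries * (delay + timeout)

validate_monitor(delay=5, timeout=3, max_retries=3)
```

With ``delay=5``, ``timeout=3`` and ``max-retries=3`` that bound is 24 seconds, which is one reason fairly aggressive ``delay`` values are common on production monitors.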
.. _http-heath-monitors-neutron:
HTTP health monitors
--------------------
In general, the application-side component of HTTP health checks is part of
the web application being load balanced. By default, Octavia will probe the "/"
path on the application server. However, in many applications this is not
appropriate because the "/" path ends up being a cached page, or causes the
application server to do more work than is necessary for a basic health check.
In addition to the above options, HTTP health monitors also have the following
options:
* ``url_path``: Path part of the URL that should be retrieved from the back-end
server. By default this is "/".
* ``http_method``: HTTP method that should be used to retrieve the
``url_path``. By default this is "GET".
* ``expected_codes``: List of HTTP status codes that indicate an OK health
check. By default this is just "200".
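A sketch of how these options combine on the monitor side (illustrative only; the Octavia API documents ``expected_codes`` as a single code, a comma-separated list such as "200,202", or a range such as "200-204"):

```python
def expand_expected_codes(expected_codes):
    """Expand an ``expected_codes`` string ("200", "200,202", "200-204")
    into a set of individual HTTP status codes."""
    codes = set()
    for part in expected_codes.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-")
            codes.update(range(int(lo), int(hi) + 1))
        else:
            codes.add(int(part))
    return codes

def health_check_ok(status_code, expected_codes="200"):
    # A probe passes when the returned status is in the expected set.
    return status_code in expand_expected_codes(expected_codes)
```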
Please keep the following best practices in mind when writing the code that
generates the health check in your web application:
* The health monitor ``url_path`` should not require authentication to load.
* By default the health monitor ``url_path`` should return an HTTP 200 OK status
code to indicate a healthy server unless you specify alternate
``expected_codes``.
* The health check should do enough internal checks to ensure the application
is healthy and no more. This may mean ensuring database or other external
storage connections are up and running, server load is acceptable, the site
is not in maintenance mode, and other tests specific to your application.
* The page generated by the health check should be very lightweight:
* It should return in a sub-second interval.
* It should not induce significant load on the application server.
* The page generated by the health check should never be cached, though the
code running the health check may reference cached data. For example, you may
find it useful to run a more extensive health check via cron and store the
results of this to disk. The code generating the page at the health monitor
``url_path`` would incorporate the results of this cron job in the tests it
performs.
* Since Octavia only cares about the HTTP status code returned, and since
health checks are run so frequently, it may make sense to use the "HEAD" or
"OPTIONS" HTTP methods to cut down on unnecessary processing of a whole page.
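The practices above can be sketched as a minimal hypothetical WSGI endpoint; the internal checks are stubbed out and the names are ours, not part of any particular framework:

```python
def healthcheck_app(environ, start_response):
    """Minimal WSGI health check endpoint: no authentication, 200 only
    when healthy, never cached, and cheap to serve via HEAD as well as
    GET."""
    checks_pass = True  # placeholder: DB ping, maintenance flag, cron results...
    status = "200 OK" if checks_pass else "503 Service Unavailable"
    start_response(status, [("Content-Type", "text/plain"),
                            ("Cache-Control", "no-store")])
    if environ.get("REQUEST_METHOD") in ("HEAD", "OPTIONS"):
        return [b""]  # the status code alone is enough for the monitor
    return [b"OK" if checks_pass else b"FAIL"]
```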
Other health monitors
---------------------
Other health monitor types include ``PING``, ``TCP``, ``HTTPS``, and
``TLS-HELLO``.
``PING`` health monitors send periodic ICMP PING requests to the back-end
servers. Obviously, your back-end servers must be configured to allow PINGs in
order for these health checks to pass.
``TCP`` health monitors open a TCP connection to the back-end server's protocol
port. Your custom TCP application should be written to respond OK to the load
balancer connecting, opening a TCP connection, and closing it again after the
TCP handshake without sending any data.
``HTTPS`` health monitors operate exactly like HTTP health monitors, but with
SSL back-end servers. Unfortunately, this causes problems if the servers are
performing client certificate validation, as HAProxy won't have a valid
certificate. In this case, using ``TLS-HELLO`` type monitoring is an alternative.
``TLS-HELLO`` health monitors simply ensure the back-end server responds to
SSLv3 client hello messages. It will not check any other health metrics, like
status code or body contents.
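A ``TCP`` health check amounts to little more than a completed handshake; a rough Python sketch of the idea (an illustration, not Octavia's implementation):

```python
import socket

def tcp_health_check(host, port, timeout=3.0):
    """Open a TCP connection and close it immediately after the
    handshake, roughly what a TCP health monitor does: a completed
    handshake counts as healthy, a refused or timed-out connection
    counts as down."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```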
Intermediate certificate chains
===============================
Some TLS certificates require you to install an intermediate certificate chain
in order for web client browsers to trust the certificate. This chain can take
several forms, and is a file provided by the organization from whom you
obtained your TLS certificate.
PEM-encoded chains
------------------
The simplest form of the intermediate chain is a PEM-encoded text file that
contains either a sequence of individually PEM-encoded certificates or one or
more PEM-encoded PKCS7 blocks. If this is the type of intermediate chain you
have been provided, the file will contain either ``-----BEGIN PKCS7-----`` or
``-----BEGIN CERTIFICATE-----`` near the top of the file, and one or more
blocks of 64-character lines of ASCII text (which will look like gobbledygook
to a human). These files are also typically named with a ``.crt`` or ``.pem``
extension.
To upload this type of intermediates chain to barbican, run a command similar
to the following (assuming "intermediates-chain.pem" is the name of the file):
::
openstack secret store --name='intermediates1' --payload-content-type='text/plain' --payload="$(cat intermediates-chain.pem)"
DER-encoded chains
------------------
If the intermediates chain provided to you is a file that contains what appears
to be random binary data, it is likely that it is a PKCS7 chain in DER format.
These files also may be named with a ``.p7b`` extension. In order to use this
intermediates chain, you can either convert it to a series of PEM-encoded
certificates with the following command:
::
openssl pkcs7 -in intermediates-chain.p7b -inform DER -print_certs -out intermediates-chain.pem
...or convert it into a PEM-encoded PKCS7 bundle with the following command:
::
openssl pkcs7 -in intermediates-chain.p7b -inform DER -outform PEM -out intermediates-chain.pem
...or simply upload the binary DER file to barbican without conversion:
::
openstack secret store --name='intermediates1' --payload-content-type='application/octet-stream' --payload-content-encoding='base64' --payload="$(cat intermediates-chain.p7b | base64)"
In any case, if the file is not a PKCS7 DER bundle, then either of the above
two openssl commands will fail.
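The PEM-versus-DER distinction above can be automated; a sketch (the function names are ours, not part of any OpenStack tooling):

```python
import base64

def looks_like_pem(raw):
    """Heuristic from the text above: PEM files carry an ASCII
    "-----BEGIN ..." marker; DER files are opaque binary."""
    return b"-----BEGIN" in raw

def barbican_payload(raw):
    """Return (payload, content_type, content_encoding) matching the
    two ``openstack secret store`` variants shown above: PEM is stored
    as text, DER is base64-encoded first."""
    if looks_like_pem(raw):
        return raw.decode("ascii"), "text/plain", None
    return (base64.b64encode(raw).decode("ascii"),
            "application/octet-stream", "base64")
```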
Further reading
===============
For examples of using Layer 7 features for more advanced load balancing, please
see: :doc:`l7-cookbook-neutron`
@@ -1,361 +0,0 @@
..
Copyright (c) 2016 IBM
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
==================================================
Layer 7 Cookbook Using Neutron Client (deprecated)
==================================================
.. warning:: The neutron client used in this document is deprecated. We
strongly encourage you to use the OpenStack Client and Octavia
OpenStack Client plugin instead. This document is being maintained
for deployments still using neutron-lbaas and the neutron client.
Introduction
============
This document gives several examples of common L7 load balancer usage. For a
description of L7 load balancing see: :doc:`l7`
For the purposes of this guide we assume that the neutron command-line
interface is going to be used to configure all features of Neutron LBaaS with
an Octavia back-end. Also, in order to keep these examples short, we assume
that many non-L7 configuration tasks (such as deploying loadbalancers,
listeners, pools, members, healthmonitors, etc.) have already been
accomplished. A description of the starting conditions is given in each example
below.
Examples
========
.. _redirect-http-to-https-n:
Redirect *http://www.example.com/* to *https://www.example.com/*
----------------------------------------------------------------
**Scenario description**:
* Load balancer *lb1* has been set up with ``TERMINATED_HTTPS`` listener
*tls_listener* on TCP port 443.
* *tls_listener* has been populated with a default pool, members, etc.
* *tls_listener* is available under the DNS name *https://www.example.com/*
* We want any regular HTTP requests to TCP port 80 on *lb1* to be redirected
to *tls_listener* on TCP port 443.
**Solution**:
1. Create listener *http_listener* as an HTTP listener on *lb1* port 80.
2. Set up an L7 Policy *policy1* on *http_listener* with action
``REDIRECT_TO_URL`` pointed at the URL *https://www.example.com/*
3. Add an L7 Rule to *policy1* which matches all requests.
**CLI commands**:
::
neutron lbaas-listener-create --name http_listener --loadbalancer lb1 --protocol HTTP --protocol-port 80
neutron lbaas-l7policy-create --action REDIRECT_TO_URL --redirect-url https://www.example.com/ --listener http_listener --name policy1
neutron lbaas-l7rule-create --type PATH --compare-type STARTS_WITH --value / policy1
.. _send-requests-to-static-pool-n:
Send requests starting with /js or /images to *static_pool*
-----------------------------------------------------------
**Scenario description**:
* Listener *listener1* on load balancer *lb1* is set up to send all requests to
its default_pool *pool1*.
* We are introducing static content servers 10.0.0.10 and 10.0.0.11 on subnet
*private-subnet*, and want any HTTP requests with a URL that starts with
either "/js" or "/images" to be sent to those two servers instead of *pool1*.
**Solution**:
1. Create pool *static_pool* on *lb1*.
2. Populate *static_pool* with the new back-end members.
3. Create L7 Policy *policy1* with action ``REDIRECT_TO_POOL`` pointed at
*static_pool*.
4. Create an L7 Rule on *policy1* which looks for "/js" at the start of
the request path.
5. Create L7 Policy *policy2* with action ``REDIRECT_TO_POOL`` pointed at
*static_pool*.
6. Create an L7 Rule on *policy2* which looks for "/images" at the start
of the request path.
**CLI commands**:
::
neutron lbaas-pool-create --name static_pool --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
neutron lbaas-member-create --subnet private-subnet --address 10.0.0.10 --protocol-port 80 static_pool
neutron lbaas-member-create --subnet private-subnet --address 10.0.0.11 --protocol-port 80 static_pool
neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool static_pool --listener listener1 --name policy1
neutron lbaas-l7rule-create --type PATH --compare-type STARTS_WITH --value /js policy1
neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool static_pool --listener listener1 --name policy2
neutron lbaas-l7rule-create --type PATH --compare-type STARTS_WITH --value /images policy2
**Alternate solution** (using regular expressions):
1. Create pool *static_pool* on *lb1*.
2. Populate *static_pool* with the new back-end members.
3. Create L7 Policy *policy1* with action ``REDIRECT_TO_POOL`` pointed at
*static_pool*.
4. Create an L7 Rule on *policy1* which uses a regular expression to match
either "/js" or "/images" at the start of the request path.
**CLI commands**:
::
neutron lbaas-pool-create --name static_pool --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
neutron lbaas-member-create --subnet private-subnet --address 10.0.0.10 --protocol-port 80 static_pool
neutron lbaas-member-create --subnet private-subnet --address 10.0.0.11 --protocol-port 80 static_pool
neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool static_pool --listener listener1 --name policy1
neutron lbaas-l7rule-create --type PATH --compare-type REGEX --value '^/(js|images)' policy1
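Before attaching a REGEX rule, it can be worth sanity-checking the expression locally, for example:

```python
import re

# The REGEX rule value from the policy above.
STATIC_RULE = re.compile(r'^/(js|images)')

def routes_to_static(path):
    """True when the L7 rule would match the request path."""
    return bool(STATIC_RULE.match(path))
```

Note that this prefix expression also matches paths such as ``/json``; if that matters for your application, anchoring with a trailing slash (``^/(js|images)/``) avoids it.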
Send requests for *http://www2.example.com/* to *pool2*
-------------------------------------------------------
**Scenario description**:
* Listener *listener1* on load balancer *lb1* is set up to send all requests to
its default_pool *pool1*.
* We have set up a new pool *pool2* on *lb1* and want any requests using the
HTTP/1.1 hostname *www2.example.com* to be sent to *pool2* instead.
**Solution**:
1. Create L7 Policy *policy1* with action ``REDIRECT_TO_POOL`` pointed at
*pool2*.
2. Create an L7 Rule on *policy1* which matches the hostname
*www2.example.com*.
**CLI commands**:
::
neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool pool2 --listener listener1 --name policy1
neutron lbaas-l7rule-create --type HOST_NAME --compare-type EQUAL_TO --value www2.example.com policy1
Send requests for *\*.example.com* to *pool2*
---------------------------------------------
**Scenario description**:
* Listener *listener1* on load balancer *lb1* is set up to send all requests to
its default_pool *pool1*.
* We have set up a new pool *pool2* on *lb1* and want any requests using any
HTTP/1.1 hostname like *\*.example.com* to be sent to *pool2* instead.
**Solution**:
1. Create L7 Policy *policy1* with action ``REDIRECT_TO_POOL`` pointed at
*pool2*.
2. Create an L7 Rule on *policy1* which matches any hostname that ends with
*example.com*.
**CLI commands**:
::
neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool pool2 --listener listener1 --name policy1
neutron lbaas-l7rule-create --type HOST_NAME --compare-type ENDS_WITH --value example.com policy1
Send unauthenticated users to *login_pool* (scenario 1)
-------------------------------------------------------
**Scenario description**:
* ``TERMINATED_HTTPS`` listener *listener1* on load balancer *lb1* is set up
to send all requests to its default_pool *pool1*.
* The site behind *listener1* requires all web users to authenticate, after
which a browser cookie *auth_token* will be set.
* When web users log out, or if the *auth_token* is invalid, the application
servers in *pool1* clear the *auth_token*.
* We want to introduce new secure authentication server 10.0.1.10 on Neutron
subnet *secure_subnet* (a different Neutron subnet from the default
application servers) which handles authenticating web users and sets the
*auth_token*.
*Note:* Obviously, to have a more secure authentication system that is less
vulnerable to attacks like XSS, the new secure authentication server will need
to set session variables to which the default_pool servers will have access
outside the data path with the web client. There may be other security concerns
as well. This example is not meant to address how these are to be
accomplished; it's mainly meant to show how L7 application routing can be done
based on a browser cookie.
**Solution**:
1. Create pool *login_pool* on *lb1*.
2. Add member 10.0.1.10 on *secure_subnet* to *login_pool*.
3. Create L7 Policy *policy1* with action ``REDIRECT_TO_POOL`` pointed at
*login_pool*.
4. Create an L7 Rule on *policy1* which looks for browser cookie *auth_token*
(with any value) and matches if it is *NOT* present.
**CLI commands**:
::
neutron lbaas-pool-create --name login_pool --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
neutron lbaas-member-create --subnet secure_subnet --address 10.0.1.10 --protocol-port 80 login_pool
neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool login_pool --listener listener1 --name policy1
neutron lbaas-l7rule-create --type COOKIE --key auth_token --compare-type REGEX --value '.*' --invert policy1
Send unauthenticated users to *login_pool* (scenario 2)
--------------------------------------------------------
**Scenario description**:
* ``TERMINATED_HTTPS`` listener *listener1* on load balancer *lb1* is set up
to send all requests to its default_pool *pool1*.
* The site behind *listener1* requires all web users to authenticate, after
which a browser cookie *auth_token* will be set.
* When web users log out, or if the *auth_token* is invalid, the application
servers in *pool1* set *auth_token* to the literal string "INVALID".
* We want to introduce new secure authentication server 10.0.1.10 on Neutron
subnet *secure_subnet* (a different Neutron subnet from the default
application servers) which handles authenticating web users and sets the
*auth_token*.
*Note:* Obviously, to have a more secure authentication system that is less
vulnerable to attacks like XSS, the new secure authentication server will need
to set session variables to which the default_pool servers will have access
outside the data path with the web client. There may be other security concerns
as well. This example is not meant to address how these are to be
accomplished; it's mainly meant to show how L7 application routing can be done
based on a browser cookie.
**Solution**:
1. Create pool *login_pool* on *lb1*.
2. Add member 10.0.1.10 on *secure_subnet* to *login_pool*.
3. Create L7 Policy *policy1* with action ``REDIRECT_TO_POOL`` pointed at
*login_pool*.
4. Create an L7 Rule on *policy1* which looks for browser cookie *auth_token*
(with any value) and matches if it is *NOT* present.
5. Create L7 Policy *policy2* with action ``REDIRECT_TO_POOL`` pointed at
*login_pool*.
6. Create an L7 Rule on *policy2* which looks for browser cookie *auth_token*
and matches if it is equal to the literal string "INVALID".
**CLI commands**:
::
neutron lbaas-pool-create --name login_pool --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
neutron lbaas-member-create --subnet secure_subnet --address 10.0.1.10 --protocol-port 80 login_pool
neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool login_pool --listener listener1 --name policy1
neutron lbaas-l7rule-create --type COOKIE --key auth_token --compare-type REGEX --value '.*' --invert policy1
neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool login_pool --listener listener1 --name policy2
neutron lbaas-l7rule-create --type COOKIE --key auth_token --compare-type EQUAL_TO --value INVALID policy2
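The combined effect of the two policies can be sanity-checked locally; a sketch using Python's standard cookie parser (the function is illustrative, not part of neutron-lbaas):

```python
from http.cookies import SimpleCookie

def needs_login(cookie_header):
    """Mirror the two L7 policies above: route to *login_pool* when the
    auth_token cookie is missing (policy1, the inverted ".*" REGEX rule)
    or literally "INVALID" (policy2, the EQUAL_TO rule)."""
    cookies = SimpleCookie(cookie_header or "")
    token = cookies.get("auth_token")
    if token is None:
        return True                   # policy1 matches
    return token.value == "INVALID"   # policy2 matches
```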
Send requests for *http://api.example.com/api* to *api_pool*
------------------------------------------------------------
**Scenario description**:
* Listener *listener1* on load balancer *lb1* is set up to send all requests
to its default_pool *pool1*.
* We have created pool *api_pool* on *lb1*, however, for legacy business logic
reasons, we only want requests sent to this pool if they match the hostname
*api.example.com* AND the request path starts with */api*.
**Solution**:
1. Create L7 Policy *policy1* with action ``REDIRECT_TO_POOL`` pointed at
*api_pool*.
2. Create an L7 Rule on *policy1* which matches the hostname *api.example.com*.
3. Create an L7 Rule on *policy1* which matches */api* at the start of the
request path. (This rule will be logically ANDed with the previous rule.)
**CLI commands**:
::
neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool api_pool --listener listener1 --name policy1
neutron lbaas-l7rule-create --type HOST_NAME --compare-type EQUAL_TO --value api.example.com policy1
neutron lbaas-l7rule-create --type PATH --compare-type STARTS_WITH --value /api policy1
Set up A/B testing on an existing production site using a cookie
----------------------------------------------------------------
**Scenario description**:
* Listener *listener1* on load balancer *lb1* is a production site set up as
described under :ref:`send-requests-to-static-pool-n` (alternate solution)
above. Specifically:
* HTTP requests with a URL that starts with either "/js" or "/images" are
sent to pool *static_pool*.
* All other requests are sent to *listener1's* default_pool *pool1*.
* We are introducing a "B" version of the production site, complete with its
own default_pool and static_pool. We will call these *pool_B* and
*static_pool_B* respectively.
* The *pool_B* members should be 10.0.0.50 and 10.0.0.51, and the
*static_pool_B* members should be 10.0.0.100 and 10.0.0.101 on subnet
*private-subnet*.
* Web clients which should be routed to the "B" version of the site get a
cookie set by the member servers in *pool1*. This cookie is called
"site_version" and should have the value "B".
**Solution**:
1. Create pool *pool_B* on *lb1*.
2. Populate *pool_B* with its new back-end members.
3. Create pool *static_pool_B* on *lb1*.
4. Populate *static_pool_B* with its new back-end members.
5. Create L7 Policy *policy2* with action ``REDIRECT_TO_POOL`` pointed at
*static_pool_B*. This should be inserted at position 1.
6. Create an L7 Rule on *policy2* which uses a regular expression to match
either "/js" or "/images" at the start of the request path.
7. Create an L7 Rule on *policy2* which matches the cookie "site_version" to
the exact string "B".
8. Create L7 Policy *policy3* with action ``REDIRECT_TO_POOL`` pointed at
*pool_B*. This should be inserted at position 2.
9. Create an L7 Rule on *policy3* which matches the cookie "site_version" to
the exact string "B".
*A word about L7 Policy position*: Since L7 Policies are evaluated in order
according to their position parameter, and since the first L7 Policy whose L7
Rules all evaluate to True is the one whose action is followed, it is important
that L7 Policies with the most specific rules get evaluated first.
For example, in this solution, if *policy3* were to appear in the listener's L7
Policy list before *policy2* (that is, if *policy3* were to have a lower
position number than *policy2*), then if a web client were to request the URL
http://www.example.com/images/a.jpg with the cookie "site_version:B", then
*policy3* would match, and the load balancer would send the request to
*pool_B*. From the scenario description, this request clearly was meant to be
sent to *static_pool_B*, which is why *policy2* needs to be evaluated before
*policy3*.
**CLI commands**:
::
neutron lbaas-pool-create --name pool_B --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
neutron lbaas-member-create --subnet private-subnet --address 10.0.0.50 --protocol-port 80 pool_B
neutron lbaas-member-create --subnet private-subnet --address 10.0.0.51 --protocol-port 80 pool_B
neutron lbaas-pool-create --name static_pool_B --lb-algorithm ROUND_ROBIN --loadbalancer lb1 --protocol HTTP
neutron lbaas-member-create --subnet private-subnet --address 10.0.0.100 --protocol-port 80 static_pool_B
neutron lbaas-member-create --subnet private-subnet --address 10.0.0.101 --protocol-port 80 static_pool_B
neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool static_pool_B --listener listener1 --name policy2 --position 1
neutron lbaas-l7rule-create --type PATH --compare-type REGEX --value '^/(js|images)' policy2
neutron lbaas-l7rule-create --type COOKIE --key site_version --compare-type EQUAL_TO --value B policy2
neutron lbaas-l7policy-create --action REDIRECT_TO_POOL --redirect-pool pool_B --listener listener1 --name policy3 --position 2
neutron lbaas-l7rule-create --type COOKIE --key site_version --compare-type EQUAL_TO --value B policy3
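The first-match evaluation described above can be sketched as follows (an illustration of the ordering semantics, not Octavia's implementation):

```python
def choose_pool(policies, default_pool, request):
    """The first policy (in position order) whose rules ALL match wins;
    otherwise the listener's default pool handles the request."""
    for policy in sorted(policies, key=lambda p: p['position']):
        if all(rule(request) for rule in policy['rules']):
            return policy['redirect_pool']
    return default_pool

policy2 = {'position': 1, 'redirect_pool': 'static_pool_B',
           'rules': [lambda r: r['path'].startswith(('/js', '/images')),
                     lambda r: r['cookies'].get('site_version') == 'B']}
policy3 = {'position': 2, 'redirect_pool': 'pool_B',
           'rules': [lambda r: r['cookies'].get('site_version') == 'B']}

request = {'path': '/images/a.jpg', 'cookies': {'site_version': 'B'}}
# With policy2 first, the static request goes to static_pool_B; swap the
# positions and the less specific policy3 would capture it instead.
```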
@@ -10,8 +6,6 @@ Cookbooks
guides/basic-cookbook
guides/l7-cookbook
guides/basic-cookbook-neutron
guides/l7-cookbook-neutron
Guides
======
@@ -19,7 +19,6 @@
[api_settings]
# bind_host = 127.0.0.1
# bind_port = 9876
# api_handler = queue_producer
# How should authentication be handled (keystone, noauth)
# auth_strategy = keystone
@@ -33,10 +32,6 @@
# api_base_uri = http://localhost:9876
# api_base_uri =
# Enable/disable exposing API endpoints. By default, both v1 and v2 are enabled.
# api_v1_enabled = True
# api_v2_enabled = True
# Enable/disable ability for users to create TLS Terminated listeners
# allow_tls_terminated_listeners = True
@@ -95,14 +90,6 @@
# health_update_driver = health_db
# stats_update_driver = stats_db
# EventStreamer options are
# queue_event_streamer,
# noop_event_streamer
# event_streamer_driver = noop_event_streamer
# Enable provisioning status sync with neutron db
# sync_provisioning_status = False
[keystone_authtoken]
# This group of config options are imported from keystone middleware. Thus the
# option names should match the names declared in the middleware.
@@ -285,17 +272,6 @@
# Topic (i.e. Queue) Name
# topic = octavia_prov
# Topic for octavia's events sent to a queue
# event_stream_topic = neutron_lbaas_event
# Transport URL to use for the neutron-lbaas synchronization event stream
# when neutron and octavia have separate queues.
# For Single Host, specify one full transport URL:
# event_stream_transport_url = rabbit://<user>:<pass>@127.0.0.1:5672/<vhost>
# For HA, specify queue nodes in cluster, comma delimited:
# event_stream_transport_url = rabbit://<user>:<pass>@server01,<user>:<pass>@server02/<vhost>
# event_stream_transport_url =
[house_keeping]
# Interval in seconds to initiate spare amphora checks
# spare_check_interval = 30
@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
@@ -1,67 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
@six.add_metaclass(abc.ABCMeta)
class BaseObjectHandler(object):
"""Base class for any object handler."""
@abc.abstractmethod
def create(self, model_id):
"""Begins process of actually creating data_model."""
pass
@abc.abstractmethod
def update(self, model_id, updated_dict):
"""Begins process of actually updating data_model."""
pass
@abc.abstractmethod
def delete(self, model_id):
"""Begins process of actually deleting data_model."""
pass
class NotImplementedObjectHandler(BaseObjectHandler):
"""Default Object Handler to force implementation of subclasses.
Helper class to make any subclass of AbstractHandler explode if it
is missing any of the required object managers.
"""
@staticmethod
def update(model_id, updated_dict):
raise NotImplementedError()
@staticmethod
def delete(model_id):
raise NotImplementedError()
@staticmethod
def create(model_id):
raise NotImplementedError()
@six.add_metaclass(abc.ABCMeta)
class BaseHandler(object):
"""Base class for all handlers."""
load_balancer = NotImplementedObjectHandler()
listener = NotImplementedObjectHandler()
pool = NotImplementedObjectHandler()
health_monitor = NotImplementedObjectHandler()
member = NotImplementedObjectHandler()
l7policy = NotImplementedObjectHandler()
l7rule = NotImplementedObjectHandler()
@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
@@ -1,478 +0,0 @@
# Copyright 2014 Rackspace
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
This is just a handler that will simulate successful operations a controller
should perform. There is nothing useful about this other than database
entity status management.
"""
import threading
import time
from oslo_log import log as logging
from octavia.api.handlers import abstract_handler
from octavia.common import constants
from octavia.common import data_models
from octavia.db import api as db_api
import octavia.db.repositories as repos
LOG = logging.getLogger(__name__)
ASYNC_TIME = 1
def validate_input(expected, actual):
if not isinstance(actual, expected):
raise InvalidHandlerInputObject(obj_type=actual.__class__)
def simulate_controller(data_model, delete=False, update=False, create=False,
batch_update=False):
"""Simulates a successful controller operator for a data model.
:param data_model: data model to simulate controller operation
:param delete: deletes from the database
"""
repo = repos.Repositories()
def member_controller(member, delete=False, update=False, create=False,
batch_update=False):
time.sleep(ASYNC_TIME)
LOG.info("Simulating controller operation for member...")
db_mem = None
if delete:
db_mem = repo.member.get(db_api.get_session(), id=member.id)
repo.member.delete(db_api.get_session(), id=member.id)
elif update:
db_mem = repo.member.get(db_api.get_session(), id=member.id)
member_dict = member.to_dict()
member_dict['operating_status'] = db_mem.operating_status
repo.member.update(db_api.get_session(), member.id, **member_dict)
elif create:
repo.member.update(db_api.get_session(), member.id,
operating_status=constants.ONLINE)
elif batch_update:
members = member
for m in members:
repo.member.update(db_api.get_session(), m.id,
operating_status=constants.ONLINE)
listeners = []
if db_mem:
for listener in db_mem.pool.listeners:
if listener not in listeners:
listeners.append(listener)
if member.pool.listeners:
for listener in member.pool.listeners:
if listener not in listeners:
listeners.append(listener)
if listeners:
for listener in listeners:
repo.listener.update(db_api.get_session(), listener.id,
operating_status=constants.ONLINE,
provisioning_status=constants.ACTIVE)
repo.load_balancer.update(db_api.get_session(),
member.pool.load_balancer.id,
operating_status=constants.ONLINE,
provisioning_status=constants.ACTIVE)
LOG.info("Simulated Controller Handler Thread Complete")
def l7policy_controller(l7policy, delete=False, update=False,
create=False):
time.sleep(ASYNC_TIME)
LOG.info("Simulating controller operation for l7policy...")
db_l7policy = None
if delete:
db_l7policy = repo.l7policy.get(db_api.get_session(),
id=l7policy.id)
repo.l7policy.delete(db_api.get_session(), id=l7policy.id)
elif update:
db_l7policy = repo.l7policy.get(db_api.get_session(),
id=l7policy.id)
l7policy_dict = l7policy.to_dict()
repo.l7policy.update(db_api.get_session(), l7policy.id,
**l7policy_dict)
elif create:
db_l7policy = repo.l7policy.create(db_api.get_session(),
**l7policy.to_dict())
if db_l7policy.listener:
repo.listener.update(db_api.get_session(), db_l7policy.listener.id,
operating_status=constants.ONLINE,
provisioning_status=constants.ACTIVE)
repo.load_balancer.update(db_api.get_session(),
db_l7policy.listener.load_balancer.id,
operating_status=constants.ONLINE,
provisioning_status=constants.ACTIVE)
LOG.info("Simulated Controller Handler Thread Complete")
def l7rule_controller(l7rule, delete=False, update=False, create=False):
time.sleep(ASYNC_TIME)
LOG.info("Simulating controller operation for l7rule...")
db_l7rule = None
if delete:
db_l7rule = repo.l7rule.get(db_api.get_session(), id=l7rule.id)
repo.l7rule.delete(db_api.get_session(), id=l7rule.id)
elif update:
db_l7rule = repo.l7rule.get(db_api.get_session(), id=l7rule.id)
l7rule_dict = l7rule.to_dict()
repo.l7rule.update(db_api.get_session(), l7rule.id, **l7rule_dict)
elif create:
l7rule_dict = l7rule.to_dict()
db_l7rule = repo.l7rule.create(db_api.get_session(), **l7rule_dict)
if db_l7rule.l7policy.listener:
listener = db_l7rule.l7policy.listener
repo.listener.update(db_api.get_session(), listener.id,
operating_status=constants.ONLINE,
provisioning_status=constants.ACTIVE)
repo.load_balancer.update(db_api.get_session(),
listener.load_balancer.id,
operating_status=constants.ONLINE,
provisioning_status=constants.ACTIVE)
LOG.info("Simulated Controller Handler Thread Complete")
def health_monitor_controller(health_monitor, delete=False, update=False,
create=False):
time.sleep(ASYNC_TIME)
LOG.info("Simulating controller operation for health monitor...")
db_hm = None
if delete:
db_hm = repo.health_monitor.get(db_api.get_session(),
pool_id=health_monitor.pool.id)
repo.health_monitor.delete(db_api.get_session(),
pool_id=health_monitor.pool.id)
elif update:
db_hm = repo.health_monitor.get(db_api.get_session(),
pool_id=health_monitor.pool_id)
hm_dict = health_monitor.to_dict()
hm_dict['operating_status'] = db_hm.operating_status
repo.health_monitor.update(db_api.get_session(), **hm_dict)
elif create:
repo.pool.update(db_api.get_session(), health_monitor.pool_id,
operating_status=constants.ONLINE)
listeners = []
if db_hm:
for listener in db_hm.pool.listeners:
if listener not in listeners:
listeners.append(listener)
if health_monitor.pool.listeners:
for listener in health_monitor.pool.listeners:
if listener not in listeners:
listeners.append(listener)
if listeners:
for listener in listeners:
repo.test_and_set_lb_and_listener_prov_status(
db_api.get_session(),
health_monitor.pool.load_balancer.id,
listener.id, constants.ACTIVE,
constants.ACTIVE)
repo.listener.update(db_api.get_session(),
listener.id,
operating_status=constants.ONLINE,
provisioning_status=constants.ACTIVE)
repo.load_balancer.update(
db_api.get_session(),
health_monitor.pool.load_balancer.id,
operating_status=constants.ONLINE,
provisioning_status=constants.ACTIVE)
LOG.info("Simulated Controller Handler Thread Complete")
def pool_controller(pool, delete=False, update=False, create=False):
time.sleep(ASYNC_TIME)
LOG.info("Simulating controller operation for pool...")
db_pool = None
if delete:
db_pool = repo.pool.get(db_api.get_session(), id=pool.id)
repo.pool.delete(db_api.get_session(), id=pool.id)
elif update:
db_pool = repo.pool.get(db_api.get_session(), id=pool.id)
pool_dict = pool.to_dict()
pool_dict['operating_status'] = db_pool.operating_status
repo.update_pool_and_sp(db_api.get_session(), pool.id, pool_dict)
elif create:
repo.pool.update(db_api.get_session(), pool.id,
operating_status=constants.ONLINE)
listeners = []
if db_pool:
for listener in db_pool.listeners:
if listener not in listeners:
listeners.append(listener)
if pool.listeners:
for listener in pool.listeners:
if listener not in listeners:
listeners.append(listener)
if listeners:
for listener in listeners:
repo.listener.update(db_api.get_session(), listener.id,
operating_status=constants.ONLINE,
provisioning_status=constants.ACTIVE)
repo.load_balancer.update(db_api.get_session(),
pool.load_balancer.id,
operating_status=constants.ONLINE,
provisioning_status=constants.ACTIVE)
LOG.info("Simulated Controller Handler Thread Complete")
def listener_controller(listener, delete=False, update=False,
create=False):
time.sleep(ASYNC_TIME)
LOG.info("Simulating controller operation for listener...")
if delete:
repo.listener.update(db_api.get_session(), listener.id,
operating_status=constants.OFFLINE,
provisioning_status=constants.DELETED)
elif update:
db_listener = repo.listener.get(db_api.get_session(),
id=listener.id)
listener_dict = listener.to_dict()
listener_dict['operating_status'] = db_listener.operating_status
repo.listener.update(db_api.get_session(), listener.id,
**listener_dict)
elif create:
repo.listener.update(db_api.get_session(), listener.id,
operating_status=constants.ONLINE,
provisioning_status=constants.ACTIVE)
repo.load_balancer.update(db_api.get_session(),
listener.load_balancer.id,
operating_status=constants.ONLINE,
provisioning_status=constants.ACTIVE)
LOG.info("Simulated Controller Handler Thread Complete")
def loadbalancer_controller(loadbalancer, delete=False, update=False,
create=False, failover=False):
time.sleep(ASYNC_TIME)
LOG.info("Simulating controller operation for loadbalancer...")
if delete:
repo.load_balancer.update(
db_api.get_session(), id=loadbalancer.id,
operating_status=constants.OFFLINE,
provisioning_status=constants.DELETED)
elif update:
db_lb = repo.load_balancer.get(db_api.get_session(),
id=loadbalancer.id)
lb_dict = loadbalancer.to_dict()
lb_dict['operating_status'] = db_lb.operating_status
repo.load_balancer.update(db_api.get_session(), loadbalancer.id,
**lb_dict)
elif create:
repo.load_balancer.update(db_api.get_session(), id=loadbalancer.id,
operating_status=constants.ONLINE,
provisioning_status=constants.ACTIVE)
elif failover:
repo.load_balancer.update(
db_api.get_session(), id=loadbalancer.id,
operating_status=constants.ONLINE,
provisioning_status=constants.PENDING_UPDATE)
LOG.info("Simulated Controller Handler Thread Complete")
controller = loadbalancer_controller
if isinstance(data_model, data_models.Member):
controller = member_controller
elif isinstance(data_model, data_models.HealthMonitor):
controller = health_monitor_controller
elif isinstance(data_model, data_models.Pool):
controller = pool_controller
elif isinstance(data_model, data_models.Listener):
controller = listener_controller
thread = threading.Thread(target=controller, args=(data_model, delete,
update, create))
thread.start()
class InvalidHandlerInputObject(Exception):
message = "Invalid Input Object %(obj_type)s"
def __init__(self, **kwargs):
message = self.message % kwargs
super(InvalidHandlerInputObject, self).__init__(message=message)
class LoadBalancerHandler(abstract_handler.BaseObjectHandler):
def create(self, load_balancer_id):
LOG.info("%(entity)s handling the creation of load balancer %(id)s",
{"entity": self.__class__.__name__, "id": load_balancer_id})
simulate_controller(load_balancer_id, create=True)
def update(self, old_lb, load_balancer):
validate_input(data_models.LoadBalancer, load_balancer)
LOG.info("%(entity)s handling the update of load balancer %(id)s",
{"entity": self.__class__.__name__, "id": old_lb.id})
load_balancer.id = old_lb.id
simulate_controller(load_balancer, update=True)
def delete(self, load_balancer_id):
LOG.info("%(entity)s handling the deletion of load balancer %(id)s",
{"entity": self.__class__.__name__, "id": load_balancer_id})
simulate_controller(load_balancer_id, delete=True)
class ListenerHandler(abstract_handler.BaseObjectHandler):
def create(self, listener_id):
LOG.info("%(entity)s handling the creation of listener %(id)s",
{"entity": self.__class__.__name__, "id": listener_id})
simulate_controller(listener_id, create=True)
def update(self, old_listener, listener):
validate_input(data_models.Listener, listener)
LOG.info("%(entity)s handling the update of listener %(id)s",
{"entity": self.__class__.__name__, "id": old_listener.id})
listener.id = old_listener.id
simulate_controller(listener, update=True)
def delete(self, listener_id):
LOG.info("%(entity)s handling the deletion of listener %(id)s",
{"entity": self.__class__.__name__, "id": listener_id})
simulate_controller(listener_id, delete=True)
class PoolHandler(abstract_handler.BaseObjectHandler):
def create(self, pool_id):
LOG.info("%(entity)s handling the creation of pool %(id)s",
{"entity": self.__class__.__name__, "id": pool_id})
simulate_controller(pool_id, create=True)
def update(self, old_pool, pool):
validate_input(data_models.Pool, pool)
LOG.info("%(entity)s handling the update of pool %(id)s",
{"entity": self.__class__.__name__, "id": old_pool.id})
pool.id = old_pool.id
simulate_controller(pool, update=True)
def delete(self, pool_id):
LOG.info("%(entity)s handling the deletion of pool %(id)s",
{"entity": self.__class__.__name__, "id": pool_id})
simulate_controller(pool_id, delete=True)
class HealthMonitorHandler(abstract_handler.BaseObjectHandler):
def create(self, pool_id):
LOG.info("%(entity)s handling the creation of health monitor "
"on pool %(id)s",
{"entity": self.__class__.__name__, "id": pool_id})
simulate_controller(pool_id, create=True)
def update(self, old_health_monitor, health_monitor):
validate_input(data_models.HealthMonitor, health_monitor)
LOG.info("%(entity)s handling the update of health monitor "
"on pool %(id)s",
{"entity": self.__class__.__name__,
"id": old_health_monitor.pool_id})
health_monitor.pool_id = old_health_monitor.pool_id
simulate_controller(health_monitor, update=True)
def delete(self, pool_id):
LOG.info("%(entity)s handling the deletion of health monitor "
"on pool %(id)s",
{"entity": self.__class__.__name__, "id": pool_id})
simulate_controller(pool_id, delete=True)
class MemberHandler(abstract_handler.BaseObjectHandler):
def create(self, member_id):
LOG.info("%(entity)s handling the creation of member %(id)s",
{"entity": self.__class__.__name__, "id": member_id})
simulate_controller(member_id, create=True)
def update(self, old_member, member):
validate_input(data_models.Member, member)
LOG.info("%(entity)s handling the update of member %(id)s",
{"entity": self.__class__.__name__, "id": old_member.id})
member.id = old_member.id
simulate_controller(member, update=True)
def batch_update(self, old_member_ids, new_member_ids, updated_members):
for m in updated_members:
validate_input(data_models.Member, m)
LOG.info("%(entity)s handling the batch update of members: "
"old=%(old)s, new=%(new)s",
{"entity": self.__class__.__name__, "old": old_member_ids,
"new": new_member_ids})
repo = repos.Repositories()
old_members = [repo.member.get(db_api.get_session(), id=mid)
for mid in old_member_ids]
new_members = [repo.member.get(db_api.get_session(), id=mid)
for mid in new_member_ids]
all_members = []
all_members.extend(old_members)
all_members.extend(new_members)
all_members.extend(updated_members)
simulate_controller(all_members, batch_update=True)
def delete(self, member_id):
LOG.info("%(entity)s handling the deletion of member %(id)s",
{"entity": self.__class__.__name__, "id": member_id})
simulate_controller(member_id, delete=True)
class L7PolicyHandler(abstract_handler.BaseObjectHandler):
def create(self, l7policy_id):
LOG.info("%(entity)s handling the creation of l7policy %(id)s",
{"entity": self.__class__.__name__, "id": l7policy_id})
simulate_controller(l7policy_id, create=True)
def update(self, old_l7policy, l7policy):
validate_input(data_models.L7Policy, l7policy)
LOG.info("%(entity)s handling the update of l7policy %(id)s",
{"entity": self.__class__.__name__, "id": old_l7policy.id})
l7policy.id = old_l7policy.id
simulate_controller(l7policy, update=True)
def delete(self, l7policy_id):
LOG.info("%(entity)s handling the deletion of l7policy %(id)s",
{"entity": self.__class__.__name__, "id": l7policy_id})
simulate_controller(l7policy_id, delete=True)
class L7RuleHandler(abstract_handler.BaseObjectHandler):
def create(self, l7rule):
LOG.info("%(entity)s handling the creation of l7rule %(id)s",
{"entity": self.__class__.__name__, "id": l7rule.id})
simulate_controller(l7rule, create=True)
def update(self, old_l7rule, l7rule):
validate_input(data_models.L7Rule, l7rule)
LOG.info("%(entity)s handling the update of l7rule %(id)s",
{"entity": self.__class__.__name__, "id": old_l7rule.id})
l7rule.id = old_l7rule.id
simulate_controller(l7rule, update=True)
def delete(self, l7rule):
LOG.info("%(entity)s handling the deletion of l7rule %(id)s",
{"entity": self.__class__.__name__, "id": l7rule.id})
simulate_controller(l7rule, delete=True)
class SimulatedControllerHandler(abstract_handler.BaseHandler):
"""Handler that simulates database calls of a successful controller."""
load_balancer = LoadBalancerHandler()
listener = ListenerHandler()
pool = PoolHandler()
health_monitor = HealthMonitorHandler()
member = MemberHandler()
l7policy = L7PolicyHandler()
l7rule = L7RuleHandler()


@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@@ -1,241 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
from oslo_config import cfg
import oslo_messaging as messaging
import six
from octavia.api.handlers import abstract_handler
from octavia.common import constants
from octavia.common import rpc
cfg.CONF.import_group('oslo_messaging', 'octavia.common.config')
@six.add_metaclass(abc.ABCMeta)
class BaseProducer(abstract_handler.BaseObjectHandler):
"""Base queue producer class."""
@abc.abstractproperty
def payload_class(self):
"""returns a string representing the container class."""
pass
def __init__(self):
topic = cfg.CONF.oslo_messaging.topic
self.target = messaging.Target(
namespace=constants.RPC_NAMESPACE_CONTROLLER_AGENT,
topic=topic, version="1.0", fanout=False)
self.client = rpc.get_client(self.target)
def create(self, model):
"""Sends a create message to the controller via oslo.messaging
:param model:
"""
model_id = getattr(model, 'id', None)
kw = {"{0}_id".format(self.payload_class): model_id}
method_name = "create_{0}".format(self.payload_class)
self.client.cast({}, method_name, **kw)
def update(self, data_model, updated_model):
"""sends an update message to the controller via oslo.messaging
:param updated_model:
:param data_model:
"""
model_id = getattr(data_model, 'id', None)
kw = {"{0}_updates".format(self.payload_class):
updated_model.to_dict(render_unsets=False),
"{0}_id".format(self.payload_class): model_id}
method_name = "update_{0}".format(self.payload_class)
self.client.cast({}, method_name, **kw)
def delete(self, data_model):
"""sends a delete message to the controller via oslo.messaging
:param data_model:
"""
model_id = getattr(data_model, 'id', None)
kw = {"{0}_id".format(self.payload_class): model_id}
method_name = "delete_{0}".format(self.payload_class)
self.client.cast({}, method_name, **kw)
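The producer derives both the RPC method name and the payload key from `payload_class`; a minimal sketch with a fake client in place of the oslo.messaging RPC client (`FakeClient` is illustrative, not part of the removed code):

```python
class FakeClient(object):
    """Records fire-and-forget RPC casts instead of sending them."""

    def __init__(self):
        self.casts = []

    def cast(self, ctxt, method, **kw):
        self.casts.append((method, kw))


payload_class = "load_balancer"
client = FakeClient()

# mirrors BaseProducer.create: method name and kwarg key come from
# payload_class, so one base class serves every object type
model_id = "1234"
kw = {"{0}_id".format(payload_class): model_id}
client.cast({}, "create_{0}".format(payload_class), **kw)

assert client.casts == [("create_load_balancer",
                         {"load_balancer_id": "1234"})]
```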
class LoadBalancerProducer(BaseProducer):
"""Sends updates,deletes and creates to the RPC end of the queue consumer
"""
PAYLOAD_CLASS = "load_balancer"
@property
def payload_class(self):
return self.PAYLOAD_CLASS
def delete(self, data_model, cascade):
"""sends a delete message to the controller via oslo.messaging
:param data_model:
:param: cascade: delete listeners, etc. as well
"""
model_id = getattr(data_model, 'id', None)
p_class = self.payload_class
kw = {"{0}_id".format(p_class): model_id, "cascade": cascade}
method_name = "delete_{0}".format(self.payload_class)
self.client.cast({}, method_name, **kw)
def failover(self, data_model):
"""sends a failover message to the controller via oslo.messaging
:param data_model:
"""
model_id = getattr(data_model, 'id', None)
p_class = self.payload_class
kw = {"{0}_id".format(p_class): model_id}
method_name = "failover_{0}".format(self.payload_class)
self.client.cast({}, method_name, **kw)
class AmphoraProducer(BaseProducer):
"""Sends failover messages to the RPC end of the queue consumer
"""
PAYLOAD_CLASS = "amphora"
@property
def payload_class(self):
return self.PAYLOAD_CLASS
def failover(self, data_model):
"""sends a failover message to the controller via oslo.messaging
:param data_model:
"""
model_id = getattr(data_model, 'id', None)
p_class = self.payload_class
kw = {"{0}_id".format(p_class): model_id}
method_name = "failover_{0}".format(self.payload_class)
self.client.cast({}, method_name, **kw)
class ListenerProducer(BaseProducer):
"""Sends updates,deletes and creates to the RPC end of the queue consumer
"""
PAYLOAD_CLASS = "listener"
@property
def payload_class(self):
return self.PAYLOAD_CLASS
class PoolProducer(BaseProducer):
"""Sends updates,deletes and creates to the RPC end of the queue consumer
"""
PAYLOAD_CLASS = "pool"
@property
def payload_class(self):
return self.PAYLOAD_CLASS
class HealthMonitorProducer(BaseProducer):
"""Sends updates,deletes and creates to the RPC end of the queue consumer
"""
PAYLOAD_CLASS = "health_monitor"
@property
def payload_class(self):
return self.PAYLOAD_CLASS
class MemberProducer(BaseProducer):
"""Sends updates,deletes and creates to the RPC end of the queue consumer
"""
PAYLOAD_CLASS = "member"
@property
def payload_class(self):
return self.PAYLOAD_CLASS
def batch_update(self, old_ids, new_ids, updated_models):
"""sends an update message to the controller via oslo.messaging
:param old_ids: list of member ids that are being deleted
:param new_ids: list of member ids that are being created
:param updated_models: list of member model objects to update
"""
updated_dicts = [m.to_dict(render_unsets=False)
for m in updated_models]
kw = {"old_{0}_ids".format(self.payload_class): old_ids,
"new_{0}_ids".format(self.payload_class): new_ids,
"updated_{0}s".format(self.payload_class): updated_dicts}
method_name = "batch_update_{0}s".format(self.payload_class)
self.client.cast({}, method_name, **kw)
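The batch payload shape can be sketched without the RPC machinery (values are illustrative):

```python
# mirrors MemberProducer.batch_update: three lists keyed off payload_class
payload_class = "member"
old_ids, new_ids = ["m1"], ["m2"]
updated_dicts = [{"id": "m3", "weight": 2}]

kw = {"old_{0}_ids".format(payload_class): old_ids,
      "new_{0}_ids".format(payload_class): new_ids,
      "updated_{0}s".format(payload_class): updated_dicts}
method_name = "batch_update_{0}s".format(payload_class)

assert method_name == "batch_update_members"
assert sorted(kw) == ["new_member_ids", "old_member_ids", "updated_members"]
```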
class L7PolicyProducer(BaseProducer):
"""Sends updates,deletes and creates to the RPC end of the queue consumer
"""
PAYLOAD_CLASS = "l7policy"
@property
def payload_class(self):
return self.PAYLOAD_CLASS
class L7RuleProducer(BaseProducer):
"""Sends updates,deletes and creates to the RPC end of the queue consumer
"""
PAYLOAD_CLASS = "l7rule"
@property
def payload_class(self):
return self.PAYLOAD_CLASS
class ProducerHandler(abstract_handler.BaseHandler):
"""Base class for all QueueProducers.
Used to send messages via the class variables load_balancer, listener,
pool, health_monitor, member, l7policy, l7rule and amphora.
"""
def __init__(self):
self.load_balancer = LoadBalancerProducer()
self.listener = ListenerProducer()
self.pool = PoolProducer()
self.health_monitor = HealthMonitorProducer()
self.member = MemberProducer()
self.l7policy = L7PolicyProducer()
self.l7rule = L7RuleProducer()
self.amphora = AmphoraProducer()


@@ -12,18 +12,15 @@
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import log as logging
from pecan import request as pecan_request
from pecan import rest
from wsme import types as wtypes
from wsmeext import pecan as wsme_pecan
from octavia.api.v1 import controllers as v1_controller
from octavia.api.v2 import controllers as v2_controller
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
@@ -32,20 +29,8 @@ class RootController(rest.RestController):
def __init__(self):
super(RootController, self).__init__()
v1_enabled = CONF.api_settings.api_v1_enabled
v2_enabled = CONF.api_settings.api_v2_enabled
if v1_enabled:
self.v1 = v1_controller.V1Controller()
if v2_enabled:
setattr(self, 'v2.0', v2_controller.V2Controller())
setattr(self, 'v2', v2_controller.V2Controller())
if not (v1_enabled or v2_enabled):
LOG.warning("Both v1 and v2 API endpoints are disabled -- is "
"this intentional?")
elif v1_enabled and v2_enabled:
LOG.warning("Both v1 and v2 API endpoints are enabled -- it is "
"a security risk to expose the v1 endpoint publicly,"
"so please make sure access to it is secured.")
setattr(self, 'v2.0', v2_controller.V2Controller())
setattr(self, 'v2', v2_controller.V2Controller())
def _add_a_version(self, versions, version, url_version, status,
timestamp, base_url):
@@ -67,37 +52,33 @@ class RootController(rest.RestController):
host_url = '{}/'.format(host_url)
versions = []
if CONF.api_settings.api_v1_enabled:
self._add_a_version(versions, 'v1', 'v1', 'DEPRECATED',
'2014-12-11T00:00:00Z', host_url)
if CONF.api_settings.api_v2_enabled:
self._add_a_version(versions, 'v2.0', 'v2', 'SUPPORTED',
'2016-12-11T00:00:00Z', host_url)
self._add_a_version(versions, 'v2.1', 'v2', 'SUPPORTED',
'2018-04-20T00:00:00Z', host_url)
self._add_a_version(versions, 'v2.2', 'v2', 'SUPPORTED',
'2018-07-31T00:00:00Z', host_url)
self._add_a_version(versions, 'v2.3', 'v2', 'SUPPORTED',
'2018-12-18T00:00:00Z', host_url)
# amp statistics
self._add_a_version(versions, 'v2.4', 'v2', 'SUPPORTED',
'2018-12-19T00:00:00Z', host_url)
# Tags
self._add_a_version(versions, 'v2.5', 'v2', 'SUPPORTED',
'2019-01-21T00:00:00Z', host_url)
# Flavors
self._add_a_version(versions, 'v2.6', 'v2', 'SUPPORTED',
'2019-01-25T00:00:00Z', host_url)
# Amphora Config update
self._add_a_version(versions, 'v2.7', 'v2', 'SUPPORTED',
'2018-01-25T12:00:00Z', host_url)
# TLS client authentication
self._add_a_version(versions, 'v2.8', 'v2', 'SUPPORTED',
'2019-02-12T00:00:00Z', host_url)
# HTTP Redirect code
self._add_a_version(versions, 'v2.9', 'v2', 'SUPPORTED',
'2019-03-04T00:00:00Z', host_url)
# Healthmonitor host header
self._add_a_version(versions, 'v2.10', 'v2', 'CURRENT',
'2019-03-05T00:00:00Z', host_url)
self._add_a_version(versions, 'v2.0', 'v2', 'SUPPORTED',
'2016-12-11T00:00:00Z', host_url)
self._add_a_version(versions, 'v2.1', 'v2', 'SUPPORTED',
'2018-04-20T00:00:00Z', host_url)
self._add_a_version(versions, 'v2.2', 'v2', 'SUPPORTED',
'2018-07-31T00:00:00Z', host_url)
self._add_a_version(versions, 'v2.3', 'v2', 'SUPPORTED',
'2018-12-18T00:00:00Z', host_url)
# amp statistics
self._add_a_version(versions, 'v2.4', 'v2', 'SUPPORTED',
'2018-12-19T00:00:00Z', host_url)
# Tags
self._add_a_version(versions, 'v2.5', 'v2', 'SUPPORTED',
'2019-01-21T00:00:00Z', host_url)
# Flavors
self._add_a_version(versions, 'v2.6', 'v2', 'SUPPORTED',
'2019-01-25T00:00:00Z', host_url)
# Amphora Config update
self._add_a_version(versions, 'v2.7', 'v2', 'SUPPORTED',
'2018-01-25T12:00:00Z', host_url)
# TLS client authentication
self._add_a_version(versions, 'v2.8', 'v2', 'SUPPORTED',
'2019-02-12T00:00:00Z', host_url)
# HTTP Redirect code
self._add_a_version(versions, 'v2.9', 'v2', 'SUPPORTED',
'2019-03-04T00:00:00Z', host_url)
# Healthmonitor host header
self._add_a_version(versions, 'v2.10', 'v2', 'CURRENT',
'2019-03-05T00:00:00Z', host_url)
return {'versions': versions}
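The body of `_add_a_version` is not shown in this diff; a hedged sketch of the version-discovery payload it assembles, mirroring the JSON in the versions response above (the implementation below is assumed, not the removed code):

```python
def _add_a_version(versions, version, url_version, status, timestamp,
                   base_url):
    # each entry mirrors one object in the "versions" response document
    versions.append({'id': version,
                     'status': status,
                     'updated': timestamp,
                     'links': [{'href': base_url + url_version,
                                'rel': 'self'}]})


versions = []
_add_a_version(versions, 'v2.10', 'v2', 'CURRENT',
               '2019-03-05T00:00:00Z', 'http://lb.example.com/')
result = {'versions': versions}

assert result['versions'][0]['links'][0]['href'] == 'http://lb.example.com/v2'
assert result['versions'][0]['status'] == 'CURRENT'
```

All v2 minor versions share the same `v2` URL path, which is why every entry's link points at the same endpoint.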


@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@@ -1,36 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import types as wtypes
from wsmeext import pecan as wsme_pecan
from octavia.api.v1.controllers import base
from octavia.api.v1.controllers import load_balancer
from octavia.api.v1.controllers import quotas
class V1Controller(base.BaseController):
loadbalancers = None
quotas = None
def __init__(self):
super(V1Controller, self).__init__()
self.loadbalancers = load_balancer.LoadBalancersController()
self.quotas = quotas.QuotasController()
@wsme_pecan.wsexpose(wtypes.text)
def get(self):
# TODO(blogan): decide what exactly should be here, if anything
return "v1"


@@ -1,141 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import log as logging
from pecan import rest
from stevedore import driver as stevedore_driver
from octavia.common import data_models
from octavia.common import exceptions
from octavia.db import repositories
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
class BaseController(rest.RestController):
def __init__(self):
super(BaseController, self).__init__()
self.repositories = repositories.Repositories()
self.handler = stevedore_driver.DriverManager(
namespace='octavia.api.handlers',
name=CONF.api_settings.api_handler,
invoke_on_load=True
).driver
@staticmethod
def _convert_db_to_type(db_entity, to_type, children=False):
"""Converts a data model into an Octavia WSME type
:param db_entity: data model to convert
:param to_type: converts db_entity to this type
"""
if isinstance(to_type, list):
to_type = to_type[0]
def _convert(db_obj):
return to_type.from_data_model(db_obj, children=children)
if isinstance(db_entity, list):
converted = [_convert(db_obj) for db_obj in db_entity]
else:
converted = _convert(db_entity)
return converted
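The list-aware conversion above can be sketched with plain callables in place of the WSME types (illustrative, not the removed code):

```python
def convert(db_entity, to_type):
    # a list-wrapped target type means "convert each element"
    if isinstance(to_type, list):
        to_type = to_type[0]
    if isinstance(db_entity, list):
        return [to_type(e) for e in db_entity]
    return to_type(db_entity)


assert convert([1, 2], [str]) == ["1", "2"]
assert convert(3, str) == "3"
```

The same helper therefore serves both single-object and collection endpoints.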
@staticmethod
def _get_db_obj(session, repo, data_model, id):
"""Gets an object from the database and returns it."""
db_obj = repo.get(session, id=id)
if not db_obj:
LOG.exception('%(name)s %(id)s not found',
{'name': data_model._name(), 'id': id})
raise exceptions.NotFound(
resource=data_model._name(), id=id)
return db_obj
def _get_db_lb(self, session, id):
"""Get a load balancer from the database."""
return self._get_db_obj(session, self.repositories.load_balancer,
data_models.LoadBalancer, id)
def _get_db_listener(self, session, id):
"""Get a listener from the database."""
return self._get_db_obj(session, self.repositories.listener,
data_models.Listener, id)
def _get_db_pool(self, session, id):
"""Get a pool from the database."""
return self._get_db_obj(session, self.repositories.pool,
data_models.Pool, id)
def _get_db_member(self, session, id):
"""Get a member from the database."""
return self._get_db_obj(session, self.repositories.member,
data_models.Member, id)
def _get_db_l7policy(self, session, id):
"""Get a L7 Policy from the database."""
return self._get_db_obj(session, self.repositories.l7policy,
data_models.L7Policy, id)
def _get_db_l7rule(self, session, id):
"""Get a L7 Rule from the database."""
return self._get_db_obj(session, self.repositories.l7rule,
data_models.L7Rule, id)
def _get_default_quotas(self, project_id):
"""Gets the project's default quotas."""
quotas = data_models.Quotas(
project_id=project_id,
load_balancer=CONF.quotas.default_load_balancer_quota,
listener=CONF.quotas.default_listener_quota,
pool=CONF.quotas.default_pool_quota,
health_monitor=CONF.quotas.default_health_monitor_quota,
member=CONF.quotas.default_member_quota)
return quotas
def _get_db_quotas(self, session, project_id):
"""Gets the project's quotas from the database, or responds with the
default quotas.
"""
# At this point project_id should not ever be None or Unset
db_quotas = self.repositories.quotas.get(
session, project_id=project_id)
if not db_quotas:
LOG.debug("No custom quotas for project %s. Returning "
"defaults...", project_id)
db_quotas = self._get_default_quotas(project_id=project_id)
else:
# Fill in any that are using the configured defaults
if db_quotas.load_balancer is None:
db_quotas.load_balancer = (CONF.quotas.
default_load_balancer_quota)
if db_quotas.listener is None:
db_quotas.listener = CONF.quotas.default_listener_quota
if db_quotas.pool is None:
db_quotas.pool = CONF.quotas.default_pool_quota
if db_quotas.health_monitor is None:
db_quotas.health_monitor = (CONF.quotas.
default_health_monitor_quota)
if db_quotas.member is None:
db_quotas.member = CONF.quotas.default_member_quota
return db_quotas
def _get_lb_project_id(self, session, id):
"""Get the project_id of the load balancer from the database."""
lb = self._get_db_obj(session, self.repositories.load_balancer,
data_models.LoadBalancer, id)
return lb.project_id
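The `_convert_db_to_type` helper above dispatches on whether the target type (and the entity) is a list, so one code path serves both single-object and collection responses. A minimal self-contained sketch of that dispatch, using illustrative `DummyModel`/`DummyType` classes in place of Octavia's real data models:

```python
class DummyModel:
    """Stand-in for a database model row (illustrative, not Octavia code)."""
    def __init__(self, id, name):
        self.id = id
        self.name = name


class DummyType:
    """Stand-in for an API response type with a from_data_model factory."""
    def __init__(self, id, name):
        self.id = id
        self.name = name

    @classmethod
    def from_data_model(cls, db_obj, children=False):
        return cls(id=db_obj.id, name=db_obj.name)


def convert_db_to_type(db_entity, to_type, children=False):
    # A list target type ([DummyType]) means "convert each element".
    if isinstance(to_type, list):
        to_type = to_type[0]

    def _convert(db_obj):
        return to_type.from_data_model(db_obj, children=children)

    if isinstance(db_entity, list):
        return [_convert(db_obj) for db_obj in db_entity]
    return _convert(db_entity)


single = convert_db_to_type(DummyModel('a', 'lb1'), DummyType)
many = convert_db_to_type([DummyModel('b', 'lb2')], [DummyType])
```

Callers pass `SomeResponse` for a single resource and `[SomeResponse]` for list endpoints, which is exactly how the controllers below use it.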

View File

@ -1,200 +0,0 @@
# Copyright 2014 Rackspace
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_db import exception as odb_exceptions
from oslo_log import log as logging
from oslo_utils import excutils
import pecan
from wsmeext import pecan as wsme_pecan
from octavia.api.v1.controllers import base
from octavia.api.v1.types import health_monitor as hm_types
from octavia.common import constants
from octavia.common import data_models
from octavia.common import exceptions
from octavia.db import api as db_api
from octavia.db import prepare as db_prepare
LOG = logging.getLogger(__name__)
class HealthMonitorController(base.BaseController):
def __init__(self, load_balancer_id, pool_id, listener_id=None):
super(HealthMonitorController, self).__init__()
self.load_balancer_id = load_balancer_id
self.listener_id = listener_id
self.pool_id = pool_id
self.handler = self.handler.health_monitor
def _get_db_hm(self, session):
"""Gets the current health monitor object from the database."""
db_hm = self.repositories.health_monitor.get(
session, pool_id=self.pool_id)
if not db_hm:
LOG.info("Health Monitor for Pool %s was not found",
self.pool_id)
raise exceptions.NotFound(
resource=data_models.HealthMonitor._name(),
id=self.pool_id)
return db_hm
@wsme_pecan.wsexpose(hm_types.HealthMonitorResponse)
def get_all(self):
"""Gets a single health monitor's details."""
# NOTE(blogan): since a pool can only have one health monitor
# we are using the get_all method to only get the single health monitor
context = pecan.request.context.get('octavia_context')
db_hm = self._get_db_hm(context.session)
return self._convert_db_to_type(db_hm, hm_types.HealthMonitorResponse)
def _get_affected_listener_ids(self, session, hm=None):
"""Gets a list of all listeners this request potentially affects."""
if hm:
listener_ids = [listener.id for listener in hm.pool.listeners]
else:
pool = self._get_db_pool(session, self.pool_id)
listener_ids = [listener.id for listener in pool.listeners]
if self.listener_id and self.listener_id not in listener_ids:
listener_ids.append(self.listener_id)
return listener_ids
def _test_lb_and_listener_statuses(self, session, hm=None):
"""Verify load balancer is in a mutable state."""
# We need to verify that any listeners referencing this pool are also
# mutable
if not self.repositories.test_and_set_lb_and_listeners_prov_status(
session, self.load_balancer_id,
constants.PENDING_UPDATE, constants.PENDING_UPDATE,
listener_ids=self._get_affected_listener_ids(session, hm)):
LOG.info("Health Monitor cannot be created or modified "
"because the Load Balancer is in an immutable state")
lb_repo = self.repositories.load_balancer
db_lb = lb_repo.get(session, id=self.load_balancer_id)
raise exceptions.ImmutableObject(resource=db_lb._name(),
id=self.load_balancer_id)
@wsme_pecan.wsexpose(hm_types.HealthMonitorResponse,
body=hm_types.HealthMonitorPOST, status_code=202)
def post(self, health_monitor):
"""Creates a health monitor on a pool."""
context = pecan.request.context.get('octavia_context')
health_monitor.project_id = self._get_lb_project_id(
context.session, self.load_balancer_id)
try:
db_hm = self.repositories.health_monitor.get(
context.session, pool_id=self.pool_id)
if db_hm:
raise exceptions.DuplicateHealthMonitor()
except exceptions.NotFound:
pass
lock_session = db_api.get_session(autocommit=False)
if self.repositories.check_quota_met(
context.session,
lock_session,
data_models.HealthMonitor,
health_monitor.project_id):
lock_session.rollback()
raise exceptions.QuotaException(
resource=data_models.HealthMonitor._name()
)
try:
hm_dict = db_prepare.create_health_monitor(
health_monitor.to_dict(render_unsets=True), self.pool_id)
self._test_lb_and_listener_statuses(lock_session)
db_hm = self.repositories.health_monitor.create(lock_session,
**hm_dict)
db_new_hm = self._get_db_hm(lock_session)
lock_session.commit()
except odb_exceptions.DBError:
lock_session.rollback()
raise exceptions.InvalidOption(value=hm_dict.get('type'),
option='type')
except Exception:
with excutils.save_and_reraise_exception():
lock_session.rollback()
try:
LOG.info("Sending Creation of Health Monitor for Pool %s to "
"handler", self.pool_id)
self.handler.create(db_hm)
except Exception:
for listener_id in self._get_affected_listener_ids(
context.session):
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.listener.update(
context.session, listener_id,
operating_status=constants.ERROR)
return self._convert_db_to_type(db_new_hm,
hm_types.HealthMonitorResponse)
@wsme_pecan.wsexpose(hm_types.HealthMonitorResponse,
body=hm_types.HealthMonitorPUT, status_code=202)
def put(self, health_monitor):
"""Updates a health monitor.
Updates a health monitor on a pool if it exists. Only one health
monitor is allowed per pool so there is no need for a health monitor
id.
"""
context = pecan.request.context.get('octavia_context')
db_hm = self._get_db_hm(context.session)
self._test_lb_and_listener_statuses(context.session, hm=db_hm)
self.repositories.health_monitor.update(
context.session, db_hm.id,
provisioning_status=constants.PENDING_UPDATE)
try:
LOG.info("Sending Update of Health Monitor for Pool %s to handler",
self.pool_id)
self.handler.update(db_hm, health_monitor)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
for listener_id in self._get_affected_listener_ids(
context.session, db_hm):
self.repositories.listener.update(
context.session, listener_id,
operating_status=constants.ERROR)
db_hm = self._get_db_hm(context.session)
return self._convert_db_to_type(db_hm, hm_types.HealthMonitorResponse)
@wsme_pecan.wsexpose(None, status_code=202)
def delete(self):
"""Deletes a health monitor."""
context = pecan.request.context.get('octavia_context')
db_hm = self._get_db_hm(context.session)
self._test_lb_and_listener_statuses(context.session, hm=db_hm)
try:
LOG.info("Sending Deletion of Health Monitor for Pool %s to "
"handler", self.pool_id)
self.handler.delete(db_hm)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
for listener_id in self._get_affected_listener_ids(
context.session, db_hm):
self.repositories.listener.update(
context.session, listener_id,
operating_status=constants.ERROR)
db_hm = self.repositories.health_monitor.get(
context.session, pool_id=self.pool_id)
return self._convert_db_to_type(db_hm, hm_types.HealthMonitorResponse)
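The `_get_affected_listener_ids` helper above computes which listeners a health-monitor change can touch: every listener attached to the pool, plus the listener from the request URL if it is not already in that set. A hedged, self-contained sketch of that union (names are illustrative, not Octavia's API):

```python
def affected_listener_ids(pool_listener_ids, explicit_listener_id=None):
    """Return the listeners a pool-level change potentially affects.

    pool_listener_ids: ids of listeners referencing the pool.
    explicit_listener_id: the listener named in the URL, if any; it is
    appended only when it is not already present, mirroring the
    controller logic above.
    """
    ids = list(pool_listener_ids)
    if explicit_listener_id and explicit_listener_id not in ids:
        ids.append(explicit_listener_id)
    return ids
```

The resulting id list is what gets locked into `PENDING_UPDATE` (or flipped to `ERROR` on handler failure) alongside the load balancer.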

View File

@ -1,187 +0,0 @@
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_db.exception as oslo_exc
from oslo_log import log as logging
from oslo_utils import excutils
import pecan
from wsme import types as wtypes
from wsmeext import pecan as wsme_pecan
from octavia.api.v1.controllers import base
from octavia.api.v1.controllers import l7rule
from octavia.api.v1.types import l7policy as l7policy_types
from octavia.common import constants
from octavia.common import data_models
from octavia.common import exceptions
from octavia.common import validate
from octavia.db import prepare as db_prepare
LOG = logging.getLogger(__name__)
class L7PolicyController(base.BaseController):
def __init__(self, load_balancer_id, listener_id):
super(L7PolicyController, self).__init__()
self.load_balancer_id = load_balancer_id
self.listener_id = listener_id
self.handler = self.handler.l7policy
@wsme_pecan.wsexpose(l7policy_types.L7PolicyResponse, wtypes.text)
def get(self, id):
"""Gets a single l7policy's details."""
context = pecan.request.context.get('octavia_context')
db_l7policy = self._get_db_l7policy(context.session, id)
return self._convert_db_to_type(db_l7policy,
l7policy_types.L7PolicyResponse)
@wsme_pecan.wsexpose([l7policy_types.L7PolicyResponse])
def get_all(self):
"""Lists all l7policies of a listener."""
context = pecan.request.context.get('octavia_context')
db_l7policies, _ = self.repositories.l7policy.get_all(
context.session, listener_id=self.listener_id)
return self._convert_db_to_type(db_l7policies,
[l7policy_types.L7PolicyResponse])
def _test_lb_and_listener_statuses(self, session):
"""Verify load balancer is in a mutable state."""
if not self.repositories.test_and_set_lb_and_listeners_prov_status(
session, self.load_balancer_id,
constants.PENDING_UPDATE, constants.PENDING_UPDATE,
listener_ids=[self.listener_id]):
LOG.info("L7Policy cannot be created or modified because the "
"Load Balancer is in an immutable state")
lb_repo = self.repositories.load_balancer
db_lb = lb_repo.get(session, id=self.load_balancer_id)
raise exceptions.ImmutableObject(resource=db_lb._name(),
id=self.load_balancer_id)
@wsme_pecan.wsexpose(l7policy_types.L7PolicyResponse,
body=l7policy_types.L7PolicyPOST, status_code=202)
def post(self, l7policy):
"""Creates a l7policy on a listener."""
context = pecan.request.context.get('octavia_context')
l7policy_dict = validate.sanitize_l7policy_api_args(
l7policy.to_dict(render_unsets=True), create=True)
# Make sure any pool specified by redirect_pool_id exists
if l7policy_dict.get('redirect_pool_id'):
self._get_db_pool(
context.session, l7policy_dict['redirect_pool_id'])
l7policy_dict = db_prepare.create_l7policy(l7policy_dict,
self.load_balancer_id,
self.listener_id)
self._test_lb_and_listener_statuses(context.session)
try:
db_l7policy = self.repositories.l7policy.create(context.session,
**l7policy_dict)
except oslo_exc.DBDuplicateEntry as de:
# Setting LB and Listener back to active because this is just a
# validation failure
self.repositories.load_balancer.update(
context.session, self.load_balancer_id,
provisioning_status=constants.ACTIVE)
self.repositories.listener.update(
context.session, self.listener_id,
provisioning_status=constants.ACTIVE)
if ['id'] == de.columns:
raise exceptions.IDAlreadyExists()
try:
LOG.info("Sending Creation of L7Policy %s to handler",
db_l7policy.id)
self.handler.create(db_l7policy)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.listener.update(
context.session, self.listener_id,
operating_status=constants.ERROR)
db_l7policy = self._get_db_l7policy(context.session, db_l7policy.id)
return self._convert_db_to_type(db_l7policy,
l7policy_types.L7PolicyResponse)
@wsme_pecan.wsexpose(l7policy_types.L7PolicyResponse,
wtypes.text, body=l7policy_types.L7PolicyPUT,
status_code=202)
def put(self, id, l7policy):
"""Updates a l7policy."""
l7policy_dict = validate.sanitize_l7policy_api_args(
l7policy.to_dict(render_unsets=False))
context = pecan.request.context.get('octavia_context')
# Make sure any specified redirect_pool_id exists
if l7policy_dict.get('redirect_pool_id'):
self._get_db_pool(
context.session, l7policy_dict['redirect_pool_id'])
db_l7policy = self._get_db_l7policy(context.session, id)
self._test_lb_and_listener_statuses(context.session)
self.repositories.l7policy.update(
context.session, id, provisioning_status=constants.PENDING_UPDATE)
try:
LOG.info("Sending Update of L7Policy %s to handler", id)
self.handler.update(
db_l7policy, l7policy_types.L7PolicyPUT(**l7policy_dict))
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.listener.update(
context.session, self.listener_id,
operating_status=constants.ERROR)
db_l7policy = self._get_db_l7policy(context.session, id)
return self._convert_db_to_type(db_l7policy,
l7policy_types.L7PolicyResponse)
@wsme_pecan.wsexpose(None, wtypes.text, status_code=202)
def delete(self, id):
"""Deletes a l7policy."""
context = pecan.request.context.get('octavia_context')
db_l7policy = self._get_db_l7policy(context.session, id)
self._test_lb_and_listener_statuses(context.session)
try:
LOG.info("Sending Deletion of L7Policy %s to handler",
db_l7policy.id)
self.handler.delete(db_l7policy)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.listener.update(
context.session, self.listener_id,
operating_status=constants.ERROR)
db_l7policy = self.repositories.l7policy.get(context.session, id=id)
return self._convert_db_to_type(db_l7policy,
l7policy_types.L7PolicyResponse)
@pecan.expose()
def _lookup(self, l7policy_id, *remainder):
"""Overridden pecan _lookup method for custom routing.
Verifies that the l7policy passed in the URL exists and, if so, decides
to which controller, if any, control should be passed.
"""
context = pecan.request.context.get('octavia_context')
if l7policy_id and remainder and remainder[0] == 'l7rules':
remainder = remainder[1:]
db_l7policy = self.repositories.l7policy.get(
context.session, id=l7policy_id)
if not db_l7policy:
LOG.info("L7Policy %s not found.", l7policy_id)
raise exceptions.NotFound(
resource=data_models.L7Policy._name(), id=l7policy_id)
return l7rule.L7RuleController(
load_balancer_id=self.load_balancer_id,
listener_id=self.listener_id,
l7policy_id=db_l7policy.id), remainder
return None
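The `_lookup` override above implements pecan's custom routing hook: the first remaining path segment selects a child controller, that segment is consumed, and the rest of the path is handed on. A minimal sketch of that segment-consuming dispatch, with a placeholder return value instead of a real controller instance:

```python
def lookup(l7policy_id, *remainder):
    """Sketch of pecan-style _lookup routing (illustrative names).

    Routes /l7policies/<id>/l7rules/... to a child handler, consuming
    the 'l7rules' segment; any other path is left unrouted (None).
    """
    if l7policy_id and remainder and remainder[0] == 'l7rules':
        # Real code returns (L7RuleController(...), remainder[1:]).
        return ('L7RuleController', l7policy_id, remainder[1:])
    return None
```

Returning `None` tells pecan no route matched, which surfaces as a 404; the real controller also verifies the l7policy exists before delegating.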

View File

@ -1,170 +0,0 @@
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_db.exception as oslo_exc
from oslo_log import log as logging
from oslo_utils import excutils
import pecan
from wsme import types as wtypes
from wsmeext import pecan as wsme_pecan
from octavia.api.v1.controllers import base
from octavia.api.v1.types import l7rule as l7rule_types
from octavia.common import constants
from octavia.common import data_models
from octavia.common import exceptions
from octavia.common import validate
from octavia.db import prepare as db_prepare
LOG = logging.getLogger(__name__)
class L7RuleController(base.BaseController):
def __init__(self, load_balancer_id, listener_id, l7policy_id):
super(L7RuleController, self).__init__()
self.load_balancer_id = load_balancer_id
self.listener_id = listener_id
self.l7policy_id = l7policy_id
self.handler = self.handler.l7rule
@wsme_pecan.wsexpose(l7rule_types.L7RuleResponse, wtypes.text)
def get(self, id):
"""Gets a single l7rule's details."""
context = pecan.request.context.get('octavia_context')
db_l7rule = self._get_db_l7rule(context.session, id)
return self._convert_db_to_type(db_l7rule,
l7rule_types.L7RuleResponse)
@wsme_pecan.wsexpose([l7rule_types.L7RuleResponse])
def get_all(self):
"""Lists all l7rules of a l7policy."""
context = pecan.request.context.get('octavia_context')
db_l7rules, _ = self.repositories.l7rule.get_all(
context.session, l7policy_id=self.l7policy_id)
return self._convert_db_to_type(db_l7rules,
[l7rule_types.L7RuleResponse])
def _test_lb_and_listener_statuses(self, session):
"""Verify load balancer is in a mutable state."""
if not self.repositories.test_and_set_lb_and_listeners_prov_status(
session, self.load_balancer_id,
constants.PENDING_UPDATE, constants.PENDING_UPDATE,
listener_ids=[self.listener_id]):
LOG.info("L7Rule cannot be created or modified because the "
"Load Balancer is in an immutable state")
lb_repo = self.repositories.load_balancer
db_lb = lb_repo.get(session, id=self.load_balancer_id)
raise exceptions.ImmutableObject(resource=db_lb._name(),
id=self.load_balancer_id)
def _check_l7policy_max_rules(self, session):
"""Checks to make sure the L7Policy doesn't have too many rules."""
count = self.repositories.l7rule.count(
session, l7policy_id=self.l7policy_id)
if count >= constants.MAX_L7RULES_PER_L7POLICY:
raise exceptions.TooManyL7RulesOnL7Policy(id=self.l7policy_id)
@wsme_pecan.wsexpose(l7rule_types.L7RuleResponse,
body=l7rule_types.L7RulePOST, status_code=202)
def post(self, l7rule):
"""Creates a l7rule on an l7policy."""
try:
validate.l7rule_data(l7rule)
except Exception as e:
raise exceptions.L7RuleValidation(error=e)
context = pecan.request.context.get('octavia_context')
self._check_l7policy_max_rules(context.session)
l7rule_dict = db_prepare.create_l7rule(
l7rule.to_dict(render_unsets=True), self.l7policy_id)
self._test_lb_and_listener_statuses(context.session)
try:
db_l7rule = self.repositories.l7rule.create(context.session,
**l7rule_dict)
except oslo_exc.DBDuplicateEntry as de:
# Setting LB and Listener back to active because this is just a
# validation failure
self.repositories.load_balancer.update(
context.session, self.load_balancer_id,
provisioning_status=constants.ACTIVE)
self.repositories.listener.update(
context.session, self.listener_id,
provisioning_status=constants.ACTIVE)
if ['id'] == de.columns:
raise exceptions.IDAlreadyExists()
try:
LOG.info("Sending Creation of L7Rule %s to handler",
db_l7rule.id)
self.handler.create(db_l7rule)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.listener.update(
context.session, self.listener_id,
operating_status=constants.ERROR)
db_l7rule = self._get_db_l7rule(context.session, db_l7rule.id)
return self._convert_db_to_type(db_l7rule,
l7rule_types.L7RuleResponse)
@wsme_pecan.wsexpose(l7rule_types.L7RuleResponse,
wtypes.text, body=l7rule_types.L7RulePUT,
status_code=202)
def put(self, id, l7rule):
"""Updates a l7rule."""
context = pecan.request.context.get('octavia_context')
db_l7rule = self._get_db_l7rule(context.session, id)
new_l7rule = db_l7rule.to_dict()
new_l7rule.update(l7rule.to_dict())
new_l7rule = data_models.L7Rule.from_dict(new_l7rule)
try:
validate.l7rule_data(new_l7rule)
except Exception as e:
raise exceptions.L7RuleValidation(error=e)
self._test_lb_and_listener_statuses(context.session)
self.repositories.l7rule.update(
context.session, id, provisioning_status=constants.PENDING_UPDATE)
try:
LOG.info("Sending Update of L7Rule %s to handler", id)
self.handler.update(db_l7rule, l7rule)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.listener.update(
context.session, self.listener_id,
operating_status=constants.ERROR)
db_l7rule = self._get_db_l7rule(context.session, id)
return self._convert_db_to_type(db_l7rule,
l7rule_types.L7RuleResponse)
@wsme_pecan.wsexpose(None, wtypes.text, status_code=202)
def delete(self, id):
"""Deletes a l7rule."""
context = pecan.request.context.get('octavia_context')
db_l7rule = self._get_db_l7rule(context.session, id)
self._test_lb_and_listener_statuses(context.session)
try:
LOG.info("Sending Deletion of L7Rule %s to handler",
db_l7rule.id)
self.handler.delete(db_l7rule)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.listener.update(
context.session, self.listener_id,
operating_status=constants.ERROR)
db_l7rule = self.repositories.l7rule.get(context.session, id=id)
return self._convert_db_to_type(db_l7rule,
l7rule_types.L7RuleResponse)
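`_check_l7policy_max_rules` above rejects a POST once the policy already holds the maximum number of rules. A self-contained sketch of that guard; the cap value of 50 here is illustrative (Octavia keeps the real value in `constants.MAX_L7RULES_PER_L7POLICY`):

```python
MAX_L7RULES_PER_L7POLICY = 50  # illustrative value, see octavia.common.constants


class TooManyL7RulesOnL7Policy(Exception):
    """Raised when a policy is already at its rule cap."""


def check_l7policy_max_rules(current_count, l7policy_id):
    # The controller counts existing rules in the DB; here the count
    # is passed in directly to keep the sketch self-contained.
    if current_count >= MAX_L7RULES_PER_L7POLICY:
        raise TooManyL7RulesOnL7Policy(l7policy_id)
```

Because the check uses `>=`, a policy that has exactly the cap already in place rejects the next create rather than allowing one past the limit.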

View File

@ -1,282 +0,0 @@
# Copyright 2014 Rackspace
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_db import exception as odb_exceptions
from oslo_log import log as logging
from oslo_utils import excutils
import pecan
from wsme import types as wtypes
from wsmeext import pecan as wsme_pecan
from octavia.api.v1.controllers import base
from octavia.api.v1.controllers import l7policy
from octavia.api.v1.controllers import listener_statistics
from octavia.api.v1.controllers import pool
from octavia.api.v1.types import listener as listener_types
from octavia.common import constants
from octavia.common import data_models
from octavia.common import exceptions
from octavia.db import api as db_api
from octavia.db import prepare as db_prepare
LOG = logging.getLogger(__name__)
class ListenersController(base.BaseController):
def __init__(self, load_balancer_id):
super(ListenersController, self).__init__()
self.load_balancer_id = load_balancer_id
self.handler = self.handler.listener
@staticmethod
def _secure_data(listener):
# TODO(blogan): Handle this data when certificate management code is
# available
listener.tls_termination = wtypes.Unset
def _get_db_listener(self, session, id):
"""Gets a listener object from the database."""
db_listener = self.repositories.listener.get(
session, load_balancer_id=self.load_balancer_id, id=id)
if not db_listener:
LOG.info("Listener %s not found.", id)
raise exceptions.NotFound(
resource=data_models.Listener._name(), id=id)
return db_listener
@wsme_pecan.wsexpose(listener_types.ListenerResponse, wtypes.text)
def get_one(self, id):
"""Gets a single listener's details."""
context = pecan.request.context.get('octavia_context')
db_listener = self._get_db_listener(context.session, id)
return self._convert_db_to_type(db_listener,
listener_types.ListenerResponse)
@wsme_pecan.wsexpose([listener_types.ListenerResponse],
ignore_extra_args=True)
def get_all(self):
"""Lists all listeners on a load balancer."""
context = pecan.request.context.get('octavia_context')
pcontext = pecan.request.context
db_listeners, _ = self.repositories.listener.get_all(
context.session,
pagination_helper=pcontext.get(constants.PAGINATION_HELPER),
load_balancer_id=self.load_balancer_id)
return self._convert_db_to_type(db_listeners,
[listener_types.ListenerResponse])
def _test_lb_and_listener_statuses(
self, session, id=None, listener_status=constants.PENDING_UPDATE):
"""Verify load balancer is in a mutable state."""
lb_repo = self.repositories.load_balancer
if id:
if not self.repositories.test_and_set_lb_and_listeners_prov_status(
session, self.load_balancer_id, constants.PENDING_UPDATE,
listener_status, listener_ids=[id]):
LOG.info("Load Balancer %s is immutable.",
self.load_balancer_id)
db_lb = lb_repo.get(session, id=self.load_balancer_id)
raise exceptions.ImmutableObject(resource=db_lb._name(),
id=self.load_balancer_id)
else:
if not lb_repo.test_and_set_provisioning_status(
session, self.load_balancer_id, constants.PENDING_UPDATE):
db_lb = lb_repo.get(session, id=self.load_balancer_id)
LOG.info("Load Balancer %s is immutable.", db_lb.id)
raise exceptions.ImmutableObject(resource=db_lb._name(),
id=self.load_balancer_id)
def _validate_pool(self, session, pool_id):
"""Validate pool given exists on same load balancer as listener."""
db_pool = self.repositories.pool.get(
session, load_balancer_id=self.load_balancer_id, id=pool_id)
if not db_pool:
raise exceptions.NotFound(
resource=data_models.Pool._name(), id=pool_id)
def _validate_listener(self, lock_session, listener_dict):
"""Validate listener for wrong protocol or duplicate listeners
Update the load balancer db when provisioning status changes.
"""
if (listener_dict and
listener_dict.get('insert_headers') and
list(set(listener_dict['insert_headers'].keys()) -
set(constants.SUPPORTED_HTTP_HEADERS))):
raise exceptions.InvalidOption(
value=listener_dict.get('insert_headers'),
option='insert_headers')
try:
sni_containers = listener_dict.pop('sni_containers', [])
db_listener = self.repositories.listener.create(
lock_session, **listener_dict)
if sni_containers:
for container in sni_containers:
sni_dict = {'listener_id': db_listener.id,
'tls_container_id': container.get(
'tls_container_id')}
self.repositories.sni.create(lock_session, **sni_dict)
db_listener = self.repositories.listener.get(lock_session,
id=db_listener.id)
return db_listener
except odb_exceptions.DBDuplicateEntry as de:
if ['id'] == de.columns:
raise exceptions.IDAlreadyExists()
elif set(['load_balancer_id', 'protocol_port']) == set(de.columns):
raise exceptions.DuplicateListenerEntry(
port=listener_dict.get('protocol_port'))
except odb_exceptions.DBError:
raise exceptions.InvalidOption(value=listener_dict.get('protocol'),
option='protocol')
def _send_listener_to_handler(self, session, db_listener):
try:
LOG.info("Sending Creation of Listener %s to handler",
db_listener.id)
self.handler.create(db_listener)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.listener.update(
session, db_listener.id,
provisioning_status=constants.ERROR)
db_listener = self._get_db_listener(session, db_listener.id)
return self._convert_db_to_type(db_listener,
listener_types.ListenerResponse)
@wsme_pecan.wsexpose(listener_types.ListenerResponse,
body=listener_types.ListenerPOST, status_code=202)
def post(self, listener):
"""Creates a listener on a load balancer."""
context = pecan.request.context.get('octavia_context')
listener.project_id = self._get_lb_project_id(context.session,
self.load_balancer_id)
lock_session = db_api.get_session(autocommit=False)
if self.repositories.check_quota_met(
context.session,
lock_session,
data_models.Listener,
listener.project_id):
lock_session.rollback()
raise exceptions.QuotaException(
resource=data_models.Listener._name())
try:
self._secure_data(listener)
listener_dict = db_prepare.create_listener(
listener.to_dict(render_unsets=True), self.load_balancer_id)
if listener_dict['default_pool_id']:
self._validate_pool(lock_session,
listener_dict['default_pool_id'])
self._test_lb_and_listener_statuses(lock_session)
# NOTE(blogan): Throwing away because we should not store
# secure data in the database nor should we send it to a handler.
if 'tls_termination' in listener_dict:
del listener_dict['tls_termination']
# This is the extra validation layer for wrong protocol or
# duplicate listeners on the same load balancer.
db_listener = self._validate_listener(lock_session, listener_dict)
lock_session.commit()
except Exception:
with excutils.save_and_reraise_exception():
lock_session.rollback()
return self._send_listener_to_handler(context.session, db_listener)
@wsme_pecan.wsexpose(listener_types.ListenerResponse, wtypes.text,
body=listener_types.ListenerPUT, status_code=202)
def put(self, id, listener):
"""Updates a listener on a load balancer."""
self._secure_data(listener)
context = pecan.request.context.get('octavia_context')
db_listener = self._get_db_listener(context.session, id)
listener_dict = listener.to_dict()
if listener_dict.get('default_pool_id'):
self._validate_pool(context.session,
listener_dict['default_pool_id'])
self._test_lb_and_listener_statuses(context.session, id=id)
try:
LOG.info("Sending Update of Listener %s to handler", id)
self.handler.update(db_listener, listener)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.listener.update(
context.session, id, provisioning_status=constants.ERROR)
db_listener = self._get_db_listener(context.session, id)
return self._convert_db_to_type(db_listener,
listener_types.ListenerResponse)
@wsme_pecan.wsexpose(None, wtypes.text, status_code=202)
def delete(self, id):
"""Deletes a listener from a load balancer."""
context = pecan.request.context.get('octavia_context')
db_listener = self._get_db_listener(context.session, id)
self._test_lb_and_listener_statuses(
context.session, id=id, listener_status=constants.PENDING_DELETE)
try:
LOG.info("Sending Deletion of Listener %s to handler",
db_listener.id)
self.handler.delete(db_listener)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.listener.update(
context.session, db_listener.id,
provisioning_status=constants.ERROR)
db_listener = self.repositories.listener.get(
context.session, id=db_listener.id)
return self._convert_db_to_type(db_listener,
listener_types.ListenerResponse)
@pecan.expose()
def _lookup(self, listener_id, *remainder):
"""Overridden pecan _lookup method for custom routing.
Verifies that the listener passed in the URL exists and, if so, decides
to which controller, if any, control should be passed.
"""
context = pecan.request.context.get('octavia_context')
is_children = (
listener_id and remainder and
remainder[0] in ('pools', 'l7policies', 'stats')
)
if is_children:
controller = remainder[0]
remainder = remainder[1:]
db_listener = self.repositories.listener.get(
context.session, id=listener_id)
if not db_listener:
LOG.info("Listener %s not found.", listener_id)
raise exceptions.NotFound(
resource=data_models.Listener._name(), id=listener_id)
if controller == 'pools':
return pool.PoolsController(
load_balancer_id=self.load_balancer_id,
listener_id=db_listener.id), remainder
elif controller == 'l7policies':
return l7policy.L7PolicyController(
load_balancer_id=self.load_balancer_id,
listener_id=db_listener.id), remainder
elif controller == 'stats':
return listener_statistics.ListenerStatisticsController(
listener_id=db_listener.id), remainder
return None
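In `_validate_listener` above, a `DBDuplicateEntry` is disambiguated by the constraint columns it violated: an `id` collision maps to `IDAlreadyExists`, while a `(load_balancer_id, protocol_port)` pair maps to `DuplicateListenerEntry`. A sketch of that column-to-error mapping (the string return values stand in for the real exception classes):

```python
def classify_duplicate(columns):
    """Map violated unique-constraint columns to an API-level error.

    columns: the `columns` attribute of oslo.db's DBDuplicateEntry,
    i.e. a list of column names from the violated constraint.
    """
    if ['id'] == columns:
        return 'IDAlreadyExists'
    if set(columns) == {'load_balancer_id', 'protocol_port'}:
        return 'DuplicateListenerEntry'
    # Anything else falls through unclassified in this sketch.
    return 'unknown'
```

Comparing as a set for the two-column case makes the check independent of the order in which the driver reports the constraint columns.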

View File

@ -1,42 +0,0 @@
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from wsme import types as wtypes
from wsmeext import pecan as wsme_pecan
from octavia.api.v1.controllers import base
from octavia.api.v1.types import listener_statistics as ls_types
from octavia.common import constants
from octavia.common import stats
class ListenerStatisticsController(base.BaseController,
stats.StatsMixin):
def __init__(self, listener_id):
super(ListenerStatisticsController, self).__init__()
self.listener_id = listener_id
@wsme_pecan.wsexpose({wtypes.text: ls_types.ListenerStatisticsResponse})
def get_all(self):
"""Gets a single listener's statistics details."""
# NOTE(sbalukoff): since a listener can only have one set of
# listener statistics we are using the get_all method to only get
# the single set of stats
context = pecan.request.context.get('octavia_context')
data_stats = self.get_listener_stats(
context.session, self.listener_id)
return {constants.LISTENER: self._convert_db_to_type(
data_stats, ls_types.ListenerStatisticsResponse)}


@@ -1,306 +0,0 @@
# Copyright 2014 Rackspace
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_db import exception as odb_exceptions
from oslo_log import log as logging
from oslo_utils import excutils
import pecan
from wsme import types as wtypes
from wsmeext import pecan as wsme_pecan
from octavia.api.v1.controllers import base
from octavia.api.v1.controllers import listener
from octavia.api.v1.controllers import load_balancer_statistics as lb_stats
from octavia.api.v1.controllers import pool
from octavia.api.v1.types import load_balancer as lb_types
from octavia.common import constants
from octavia.common import data_models
from octavia.common import exceptions
from octavia.common import utils
import octavia.common.validate as validate
from octavia.db import api as db_api
from octavia.db import prepare as db_prepare
from octavia.i18n import _
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
class LoadBalancersController(base.BaseController):
def __init__(self):
super(LoadBalancersController, self).__init__()
self.handler = self.handler.load_balancer
@wsme_pecan.wsexpose(lb_types.LoadBalancerResponse, wtypes.text)
def get_one(self, id):
"""Gets a single load balancer's details."""
context = pecan.request.context.get('octavia_context')
load_balancer = self._get_db_lb(context.session, id)
return self._convert_db_to_type(load_balancer,
lb_types.LoadBalancerResponse)
@wsme_pecan.wsexpose([lb_types.LoadBalancerResponse], wtypes.text,
wtypes.text, ignore_extra_args=True)
def get_all(self, tenant_id=None, project_id=None):
"""Lists all load balancers."""
# NOTE(blogan): tenant_id and project_id are optional query parameters
# tenant_id and project_id are the same thing. tenant_id will be kept
# around for a short amount of time.
pcontext = pecan.request.context
context = pcontext.get('octavia_context')
project_id = context.project_id or project_id or tenant_id
load_balancers, _ = self.repositories.load_balancer.get_all(
context.session,
pagination_helper=pcontext.get(constants.PAGINATION_HELPER),
project_id=project_id)
return self._convert_db_to_type(load_balancers,
[lb_types.LoadBalancerResponse])
def _test_lb_status(self, session, id, lb_status=constants.PENDING_UPDATE):
"""Verify load balancer is in a mutable state."""
lb_repo = self.repositories.load_balancer
if not lb_repo.test_and_set_provisioning_status(
session, id, lb_status):
LOG.info("Load Balancer %s is immutable.", id)
db_lb = lb_repo.get(session, id=id)
raise exceptions.ImmutableObject(resource=db_lb._name(),
id=id)
def _create_load_balancer_graph_db(self, session,
lock_session, load_balancer):
prepped_lb = db_prepare.create_load_balancer_tree(
load_balancer.to_dict(render_unsets=True))
try:
db_lb = self.repositories.create_load_balancer_tree(
session, lock_session, prepped_lb)
except Exception:
raise
return db_lb
def _load_balancer_graph_to_handler(self, context, db_lb):
try:
LOG.info("Sending full load balancer configuration %s to "
"the handler", db_lb.id)
self.handler.create(db_lb)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.load_balancer.update(
context.session, db_lb.id,
provisioning_status=constants.ERROR)
return self._convert_db_to_type(db_lb, lb_types.LoadBalancerResponse,
children=True)
@staticmethod
def _validate_network_and_fill_or_validate_subnet(load_balancer):
network = validate.network_exists_optionally_contains_subnet(
network_id=load_balancer.vip.network_id,
subnet_id=load_balancer.vip.subnet_id)
# If subnet is not provided, pick the first subnet, preferring ipv4
if not load_balancer.vip.subnet_id:
network_driver = utils.get_network_driver()
for subnet_id in network.subnets:
# Use the first subnet, in case there are no ipv4 subnets
if not load_balancer.vip.subnet_id:
load_balancer.vip.subnet_id = subnet_id
subnet = network_driver.get_subnet(subnet_id)
if subnet.ip_version == 4:
load_balancer.vip.subnet_id = subnet_id
break
if not load_balancer.vip.subnet_id:
raise exceptions.ValidationException(detail=_(
"Supplied network does not contain a subnet."
))
@wsme_pecan.wsexpose(lb_types.LoadBalancerResponse,
body=lb_types.LoadBalancerPOST, status_code=202)
def post(self, load_balancer):
"""Creates a load balancer."""
context = pecan.request.context.get('octavia_context')
project_id = context.project_id
if context.is_admin or (CONF.api_settings.auth_strategy ==
constants.NOAUTH):
if load_balancer.project_id:
project_id = load_balancer.project_id
if not project_id:
raise exceptions.ValidationException(detail=_(
"Missing project ID in request where one is required."))
load_balancer.project_id = project_id
if not (load_balancer.vip.port_id or
load_balancer.vip.network_id or
load_balancer.vip.subnet_id):
raise exceptions.ValidationException(detail=_(
"VIP must contain one of: port_id, network_id, subnet_id."))
# Validate the port id
if load_balancer.vip.port_id:
port = validate.port_exists(port_id=load_balancer.vip.port_id)
load_balancer.vip.network_id = port.network_id
# If no port id, validate the network id (and subnet if provided)
elif load_balancer.vip.network_id:
self._validate_network_and_fill_or_validate_subnet(load_balancer)
# Validate just the subnet id
elif load_balancer.vip.subnet_id:
subnet = validate.subnet_exists(
subnet_id=load_balancer.vip.subnet_id)
load_balancer.vip.network_id = subnet.network_id
lock_session = db_api.get_session(autocommit=False)
if load_balancer.listeners:
try:
db_lb = self._create_load_balancer_graph_db(context.session,
lock_session,
load_balancer)
lock_session.commit()
except Exception:
with excutils.save_and_reraise_exception():
lock_session.rollback()
return self._load_balancer_graph_to_handler(context, db_lb)
else:
if self.repositories.check_quota_met(
context.session,
lock_session,
data_models.LoadBalancer,
load_balancer.project_id):
lock_session.rollback()
raise exceptions.QuotaException(
resource=data_models.LoadBalancer._name())
try:
lb_dict = db_prepare.create_load_balancer(load_balancer.to_dict(
render_unsets=True
))
vip_dict = lb_dict.pop('vip', {})
db_lb = self.repositories.create_load_balancer_and_vip(
lock_session, lb_dict, vip_dict)
lock_session.commit()
except odb_exceptions.DBDuplicateEntry:
lock_session.rollback()
raise exceptions.IDAlreadyExists()
except Exception:
with excutils.save_and_reraise_exception():
lock_session.rollback()
# Handler will be responsible for sending to controller
try:
LOG.info("Sending created Load Balancer %s to the handler",
db_lb.id)
self.handler.create(db_lb)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.load_balancer.update(
context.session, db_lb.id,
provisioning_status=constants.ERROR)
return self._convert_db_to_type(db_lb, lb_types.LoadBalancerResponse)
@wsme_pecan.wsexpose(lb_types.LoadBalancerResponse,
wtypes.text, status_code=202,
body=lb_types.LoadBalancerPUT)
def put(self, id, load_balancer):
"""Updates a load balancer."""
context = pecan.request.context.get('octavia_context')
db_lb = self._get_db_lb(context.session, id)
self._test_lb_status(context.session, id)
try:
LOG.info("Sending updated Load Balancer %s to the handler",
id)
self.handler.update(db_lb, load_balancer)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.load_balancer.update(
context.session, id, provisioning_status=constants.ERROR)
db_lb = self._get_db_lb(context.session, id)
return self._convert_db_to_type(db_lb, lb_types.LoadBalancerResponse)
def _delete(self, id, cascade=False):
"""Deletes a load balancer."""
context = pecan.request.context.get('octavia_context')
db_lb = self._get_db_lb(context.session, id)
if (db_lb.listeners or db_lb.pools) and not cascade:
msg = _("Cannot delete Load Balancer %s - it has children") % id
LOG.warning(msg)
raise exceptions.ValidationException(detail=msg)
self._test_lb_status(context.session, id,
lb_status=constants.PENDING_DELETE)
try:
LOG.info("Sending deleted Load Balancer %s to the handler",
db_lb.id)
self.handler.delete(db_lb, cascade)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.load_balancer.update(
context.session, db_lb.id,
provisioning_status=constants.ERROR)
return self._convert_db_to_type(db_lb, lb_types.LoadBalancerResponse)
@wsme_pecan.wsexpose(None, wtypes.text, status_code=202)
def delete(self, id):
"""Deletes a load balancer."""
return self._delete(id, cascade=False)
@pecan.expose()
def _lookup(self, lb_id, *remainder):
"""Overridden pecan _lookup method for custom routing.
Verifies that the load balancer passed in the url exists and, if so,
decides to which controller, if any, control should be passed.
"""
context = pecan.request.context.get('octavia_context')
possible_remainder = ('listeners', 'pools', 'delete_cascade', 'stats')
if lb_id and remainder and (remainder[0] in possible_remainder):
controller = remainder[0]
remainder = remainder[1:]
db_lb = self.repositories.load_balancer.get(context.session,
id=lb_id)
if not db_lb:
LOG.info("Load Balancer %s was not found.", lb_id)
raise exceptions.NotFound(
resource=data_models.LoadBalancer._name(), id=lb_id)
if controller == 'listeners':
return listener.ListenersController(
load_balancer_id=db_lb.id), remainder
elif controller == 'pools':
return pool.PoolsController(
load_balancer_id=db_lb.id), remainder
elif controller == 'delete_cascade':
return LBCascadeDeleteController(db_lb.id), ''
elif controller == 'stats':
return lb_stats.LoadBalancerStatisticsController(
loadbalancer_id=db_lb.id), remainder
return None
class LBCascadeDeleteController(LoadBalancersController):
def __init__(self, lb_id):
super(LBCascadeDeleteController, self).__init__()
self.lb_id = lb_id
@wsme_pecan.wsexpose(None, status_code=202)
def delete(self):
"""Deletes a load balancer."""
return self._delete(self.lb_id, cascade=True)
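The VIP subnet auto-selection in `_validate_network_and_fill_or_validate_subnet` above remembers the first subnet it sees but keeps scanning for an IPv4 one. That selection loop reduces to this sketch, using a hypothetical stand-in for the network driver's subnet objects:

```python
from collections import namedtuple

# Hypothetical stand-in for the subnet objects returned by the network driver.
Subnet = namedtuple('Subnet', ['id', 'ip_version'])


def pick_vip_subnet(subnets):
    """Pick the first IPv4 subnet if one exists, else the first subnet.

    Mirrors the loop above: the first subnet is kept as a fallback in case
    the network has no IPv4 subnets; scanning stops at the first IPv4 hit.
    """
    chosen = None
    for subnet in subnets:
        if chosen is None:
            chosen = subnet.id
        if subnet.ip_version == 4:
            return subnet.id
    return chosen  # None means the network has no subnets at all


print(pick_vip_subnet([Subnet('s-v6', 6), Subnet('s-v4', 4)]))  # -> s-v4
print(pick_vip_subnet([]))                                      # -> None
```

In the controller, the `None` case is what raises "Supplied network does not contain a subnet."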


@@ -1,40 +0,0 @@
# Copyright 2016 IBM
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from wsme import types as wtypes
from wsmeext import pecan as wsme_pecan
from octavia.api.v1.controllers import base
from octavia.api.v1.types import load_balancer_statistics as lb_types
from octavia.common import constants
from octavia.common import stats
class LoadBalancerStatisticsController(base.BaseController,
stats.StatsMixin):
def __init__(self, loadbalancer_id):
super(LoadBalancerStatisticsController, self).__init__()
self.loadbalancer_id = loadbalancer_id
@wsme_pecan.wsexpose(
{wtypes.text: lb_types.LoadBalancerStatisticsResponse})
def get(self):
"""Gets a single loadbalancer's statistics details."""
context = pecan.request.context.get('octavia_context')
data_stats = self.get_loadbalancer_stats(
context.session, self.loadbalancer_id)
return {constants.LOADBALANCER: self._convert_db_to_type(
data_stats, lb_types.LoadBalancerStatisticsResponse)}
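`get_loadbalancer_stats` (from the `StatsMixin`) rolls the per-listener counters up into one record for the whole load balancer. A minimal sketch of such an aggregation; the field names here are illustrative, not the exact Octavia statistics schema:

```python
def aggregate_stats(listener_stats):
    """Sum per-listener traffic counters into one LB-wide record.

    Missing fields on a listener record count as zero, so partially
    reported listeners do not break the roll-up.
    """
    totals = {'bytes_in': 0, 'bytes_out': 0, 'total_connections': 0}
    for stats in listener_stats:
        for field in totals:
            totals[field] += stats.get(field, 0)
    return totals


print(aggregate_stats([
    {'bytes_in': 10, 'bytes_out': 5, 'total_connections': 2},
    {'bytes_in': 1, 'bytes_out': 1, 'total_connections': 1},
]))
# -> {'bytes_in': 11, 'bytes_out': 6, 'total_connections': 3}
```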


@@ -1,196 +0,0 @@
# Copyright 2014 Rackspace
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_db.exception as oslo_exc
from oslo_log import log as logging
from oslo_utils import excutils
import pecan
from wsme import types as wtypes
from wsmeext import pecan as wsme_pecan
from octavia.api.v1.controllers import base
from octavia.api.v1.types import member as member_types
from octavia.common import constants
from octavia.common import data_models
from octavia.common import exceptions
import octavia.common.validate as validate
from octavia.db import api as db_api
from octavia.db import prepare as db_prepare
LOG = logging.getLogger(__name__)
class MembersController(base.BaseController):
def __init__(self, load_balancer_id, pool_id, listener_id=None):
super(MembersController, self).__init__()
self.load_balancer_id = load_balancer_id
self.listener_id = listener_id
self.pool_id = pool_id
self.handler = self.handler.member
@wsme_pecan.wsexpose(member_types.MemberResponse, wtypes.text)
def get(self, id):
"""Gets a single pool member's details."""
context = pecan.request.context.get('octavia_context')
db_member = self._get_db_member(context.session, id)
return self._convert_db_to_type(db_member, member_types.MemberResponse)
@wsme_pecan.wsexpose([member_types.MemberResponse], ignore_extra_args=True)
def get_all(self):
"""Lists all pool members of a pool."""
pcontext = pecan.request.context
context = pcontext.get('octavia_context')
db_members, _ = self.repositories.member.get_all(
context.session,
pagination_helper=pcontext.get(constants.PAGINATION_HELPER),
pool_id=self.pool_id)
return self._convert_db_to_type(db_members,
[member_types.MemberResponse])
def _get_affected_listener_ids(self, session, member=None):
"""Gets a list of all listeners this request potentially affects."""
listener_ids = []
if member:
listener_ids = [l.id for l in member.pool.listeners]
else:
pool = self._get_db_pool(session, self.pool_id)
for listener in pool.listeners:
if listener.id not in listener_ids:
listener_ids.append(listener.id)
if self.listener_id and self.listener_id not in listener_ids:
listener_ids.append(self.listener_id)
return listener_ids
def _test_lb_and_listener_statuses(self, session, member=None):
"""Verify load balancer is in a mutable state."""
# We need to verify that any listeners referencing this member's
# pool are also mutable
if not self.repositories.test_and_set_lb_and_listeners_prov_status(
session, self.load_balancer_id,
constants.PENDING_UPDATE, constants.PENDING_UPDATE,
listener_ids=self._get_affected_listener_ids(session, member)):
LOG.info("Member cannot be created or modified because the "
"Load Balancer is in an immutable state")
lb_repo = self.repositories.load_balancer
db_lb = lb_repo.get(session, id=self.load_balancer_id)
raise exceptions.ImmutableObject(resource=db_lb._name(),
id=self.load_balancer_id)
@wsme_pecan.wsexpose(member_types.MemberResponse,
body=member_types.MemberPOST, status_code=202)
def post(self, member):
"""Creates a pool member on a pool."""
context = pecan.request.context.get('octavia_context')
member.project_id = self._get_lb_project_id(context.session,
self.load_balancer_id)
# Validate member subnet
if member.subnet_id:
validate.subnet_exists(member.subnet_id)
lock_session = db_api.get_session(autocommit=False)
if self.repositories.check_quota_met(
context.session,
lock_session,
data_models.Member,
member.project_id):
lock_session.rollback()
raise exceptions.QuotaException(
resource=data_models.Member._name()
)
try:
member_dict = db_prepare.create_member(member.to_dict(
render_unsets=True), self.pool_id)
self._test_lb_and_listener_statuses(lock_session)
db_member = self.repositories.member.create(lock_session,
**member_dict)
db_new_member = self._get_db_member(lock_session, db_member.id)
lock_session.commit()
except oslo_exc.DBDuplicateEntry as de:
lock_session.rollback()
if ['id'] == de.columns:
raise exceptions.IDAlreadyExists()
raise exceptions.DuplicateMemberEntry(
ip_address=member_dict.get('ip_address'),
port=member_dict.get('protocol_port'))
except Exception:
with excutils.save_and_reraise_exception():
lock_session.rollback()
try:
LOG.info("Sending Creation of Member %s to handler",
db_member.id)
self.handler.create(db_member)
except Exception:
for listener_id in self._get_affected_listener_ids(
context.session):
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.listener.update(
context.session, listener_id,
operating_status=constants.ERROR)
return self._convert_db_to_type(db_new_member,
member_types.MemberResponse)
@wsme_pecan.wsexpose(member_types.MemberResponse,
wtypes.text, body=member_types.MemberPUT,
status_code=202)
def put(self, id, member):
"""Updates a pool member."""
context = pecan.request.context.get('octavia_context')
db_member = self._get_db_member(context.session, id)
self._test_lb_and_listener_statuses(context.session, member=db_member)
self.repositories.member.update(
context.session, id, provisioning_status=constants.PENDING_UPDATE)
try:
LOG.info("Sending Update of Member %s to handler", id)
self.handler.update(db_member, member)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
for listener_id in self._get_affected_listener_ids(
context.session, db_member):
self.repositories.listener.update(
context.session, listener_id,
operating_status=constants.ERROR)
db_member = self._get_db_member(context.session, id)
return self._convert_db_to_type(db_member, member_types.MemberResponse)
@wsme_pecan.wsexpose(None, wtypes.text, status_code=202)
def delete(self, id):
"""Deletes a pool member."""
context = pecan.request.context.get('octavia_context')
db_member = self._get_db_member(context.session, id)
self._test_lb_and_listener_statuses(context.session, member=db_member)
try:
LOG.info("Sending Deletion of Member %s to handler",
db_member.id)
self.handler.delete(db_member)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
for listener_id in self._get_affected_listener_ids(
context.session, db_member):
self.repositories.listener.update(
context.session, listener_id,
operating_status=constants.ERROR)
db_member = self.repositories.member.get(context.session, id=id)
return self._convert_db_to_type(db_member, member_types.MemberResponse)
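`_get_affected_listener_ids` above collects every listener that references the member's pool, then appends the listener from the URL path if it is not already present. That deduplicating merge can be sketched as a pure function (hypothetical signature, kept separate from the controller):

```python
def affected_listener_ids(pool_listener_ids, url_listener_id=None):
    """Collect listener IDs touched by a member change, without duplicates.

    Order is preserved: pool listeners first (first occurrence wins),
    then the listener named in the URL path, if any and if new.
    """
    seen = []
    for lid in pool_listener_ids:
        if lid not in seen:
            seen.append(lid)
    if url_listener_id and url_listener_id not in seen:
        seen.append(url_listener_id)
    return seen


print(affected_listener_ids(['l1', 'l2', 'l1'], 'l3'))  # -> ['l1', 'l2', 'l3']
```

The resulting list is what the controller hands to `test_and_set_lb_and_listeners_prov_status`, so every affected listener gets locked (or marked ERROR) together.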


@@ -1,258 +0,0 @@
# Copyright 2014 Rackspace
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_db import exception as odb_exceptions
from oslo_log import log as logging
from oslo_utils import excutils
import pecan
from wsme import types as wtypes
from wsmeext import pecan as wsme_pecan
from octavia.api.v1.controllers import base
from octavia.api.v1.controllers import health_monitor
from octavia.api.v1.controllers import member
from octavia.api.v1.types import pool as pool_types
from octavia.common import constants
from octavia.common import data_models
from octavia.common import exceptions
from octavia.db import api as db_api
from octavia.db import prepare as db_prepare
LOG = logging.getLogger(__name__)
class PoolsController(base.BaseController):
def __init__(self, load_balancer_id, listener_id=None):
super(PoolsController, self).__init__()
self.load_balancer_id = load_balancer_id
self.listener_id = listener_id
self.handler = self.handler.pool
@wsme_pecan.wsexpose(pool_types.PoolResponse, wtypes.text)
def get(self, id):
"""Gets a pool's details."""
context = pecan.request.context.get('octavia_context')
db_pool = self._get_db_pool(context.session, id)
return self._convert_db_to_type(db_pool, pool_types.PoolResponse)
@wsme_pecan.wsexpose([pool_types.PoolResponse], wtypes.text,
ignore_extra_args=True)
def get_all(self, listener_id=None):
"""Lists all pools on a listener or loadbalancer."""
pcontext = pecan.request.context
context = pcontext.get('octavia_context')
if listener_id is not None:
self.listener_id = listener_id
if self.listener_id:
pools = self._get_db_listener(context.session,
self.listener_id).pools
else:
pools, _ = self.repositories.pool.get_all(
context.session,
pagination_helper=pcontext.get(constants.PAGINATION_HELPER),
load_balancer_id=self.load_balancer_id)
return self._convert_db_to_type(pools, [pool_types.PoolResponse])
def _get_affected_listener_ids(self, session, pool=None):
"""Gets a list of all listeners this request potentially affects."""
listener_ids = []
if pool:
listener_ids = [l.id for l in pool.listeners]
if self.listener_id and self.listener_id not in listener_ids:
listener_ids.append(self.listener_id)
return listener_ids
def _test_lb_and_listener_statuses(self, session, pool=None):
"""Verify load balancer is in a mutable state."""
# We need to verify that any listeners referencing this pool are also
# mutable
if not self.repositories.test_and_set_lb_and_listeners_prov_status(
session, self.load_balancer_id,
constants.PENDING_UPDATE, constants.PENDING_UPDATE,
listener_ids=self._get_affected_listener_ids(session, pool)):
LOG.info("Pool cannot be created or modified because the Load "
"Balancer is in an immutable state")
lb_repo = self.repositories.load_balancer
db_lb = lb_repo.get(session, id=self.load_balancer_id)
raise exceptions.ImmutableObject(resource=db_lb._name(),
id=self.load_balancer_id)
def _validate_create_pool(self, lock_session, pool_dict):
"""Validate creating pool on load balancer.
Update database for load balancer and (optional) listener based on
provisioning status.
"""
try:
return self.repositories.create_pool_on_load_balancer(
lock_session, pool_dict, listener_id=self.listener_id)
except odb_exceptions.DBDuplicateEntry as de:
if ['id'] == de.columns:
raise exceptions.IDAlreadyExists()
except odb_exceptions.DBError:
# TODO(blogan): will have to do separate validation protocol
# before creation or update since the exception messages
# do not give any information as to what constraint failed
raise exceptions.InvalidOption(value='', option='')
def _send_pool_to_handler(self, session, db_pool):
try:
LOG.info("Sending Creation of Pool %s to handler", db_pool.id)
self.handler.create(db_pool)
except Exception:
for listener_id in self._get_affected_listener_ids(session):
with excutils.save_and_reraise_exception(reraise=False):
self.repositories.listener.update(
session, listener_id, operating_status=constants.ERROR)
db_pool = self._get_db_pool(session, db_pool.id)
return self._convert_db_to_type(db_pool, pool_types.PoolResponse)
@wsme_pecan.wsexpose(pool_types.PoolResponse, body=pool_types.PoolPOST,
status_code=202)
def post(self, pool):
"""Creates a pool on a load balancer or listener.
Note that this can optionally take a listener_id with which the pool
should be associated as the listener's default_pool. If specified,
the pool creation will fail if the listener specified already has
a default_pool.
"""
# For some API requests the listener_id will be passed in the
# pool_dict:
context = pecan.request.context.get('octavia_context')
pool.project_id = self._get_lb_project_id(context.session,
self.load_balancer_id)
lock_session = db_api.get_session(autocommit=False)
if self.repositories.check_quota_met(
context.session,
lock_session,
data_models.Pool,
pool.project_id):
lock_session.rollback()
raise exceptions.QuotaException(
resource=data_models.Pool._name())
try:
pool_dict = db_prepare.create_pool(
pool.to_dict(render_unsets=True))
if 'listener_id' in pool_dict:
if pool_dict['listener_id'] is not None:
self.listener_id = pool_dict.pop('listener_id')
else:
del pool_dict['listener_id']
listener_repo = self.repositories.listener
if self.listener_id and listener_repo.has_default_pool(
lock_session, self.listener_id):
raise exceptions.DuplicatePoolEntry()
self._test_lb_and_listener_statuses(lock_session)
pool_dict['operating_status'] = constants.OFFLINE
pool_dict['load_balancer_id'] = self.load_balancer_id
db_pool = self._validate_create_pool(lock_session, pool_dict)
lock_session.commit()
except Exception:
with excutils.save_and_reraise_exception():
lock_session.rollback()
return self._send_pool_to_handler(context.session, db_pool)
@wsme_pecan.wsexpose(pool_types.PoolResponse, wtypes.text,
body=pool_types.PoolPUT, status_code=202)
def put(self, id, pool):
"""Updates a pool on a load balancer."""
context = pecan.request.context.get('octavia_context')
db_pool = self._get_db_pool(context.session, id)
self._test_lb_and_listener_statuses(context.session, pool=db_pool)
self.repositories.pool.update(
context.session, id, provisioning_status=constants.PENDING_UPDATE)
try:
LOG.info("Sending Update of Pool %s to handler", id)
self.handler.update(db_pool, pool)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
for listener in db_pool.listeners:
self.repositories.listener.update(
context.session, listener.id,
operating_status=constants.ERROR)
self.repositories.pool.update(
context.session, db_pool.id,
operating_status=constants.ERROR)
db_pool = self._get_db_pool(context.session, id)
return self._convert_db_to_type(db_pool, pool_types.PoolResponse)
@wsme_pecan.wsexpose(None, wtypes.text, status_code=202)
def delete(self, id):
"""Deletes a pool from a load balancer."""
context = pecan.request.context.get('octavia_context')
db_pool = self._get_db_pool(context.session, id)
if db_pool.l7policies:
raise exceptions.PoolInUseByL7Policy(
id=db_pool.id, l7policy_id=db_pool.l7policies[0].id)
self._test_lb_and_listener_statuses(context.session, pool=db_pool)
try:
LOG.info("Sending Deletion of Pool %s to handler", db_pool.id)
self.handler.delete(db_pool)
except Exception:
with excutils.save_and_reraise_exception(reraise=False):
for listener in db_pool.listeners:
self.repositories.listener.update(
context.session, listener.id,
operating_status=constants.ERROR)
self.repositories.pool.update(
context.session, db_pool.id,
operating_status=constants.ERROR)
db_pool = self.repositories.pool.get(context.session, id=db_pool.id)
return self._convert_db_to_type(db_pool, pool_types.PoolResponse)
@pecan.expose()
def _lookup(self, pool_id, *remainder):
"""Overridden pecan _lookup method for custom routing.
Verifies that the pool passed in the url exists and, if so, decides
to which controller, if any, control should be passed.
"""
context = pecan.request.context.get('octavia_context')
is_children = (
pool_id and remainder and (
remainder[0] == 'members' or remainder[0] == 'healthmonitor'
)
)
if is_children:
controller = remainder[0]
remainder = remainder[1:]
db_pool = self.repositories.pool.get(context.session, id=pool_id)
if not db_pool:
LOG.info("Pool %s not found.", pool_id)
raise exceptions.NotFound(resource=data_models.Pool._name(),
id=pool_id)
if controller == 'members':
return member.MembersController(
load_balancer_id=self.load_balancer_id,
pool_id=db_pool.id,
listener_id=self.listener_id), remainder
elif controller == 'healthmonitor':
return health_monitor.HealthMonitorController(
load_balancer_id=self.load_balancer_id,
pool_id=db_pool.id,
listener_id=self.listener_id), remainder
return None


@@ -1,101 +0,0 @@
# Copyright 2016 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
import pecan
from wsme import types as wtypes
from wsmeext import pecan as wsme_pecan
from octavia.api.v1.controllers import base
from octavia.api.v1.types import quotas as quota_types
from octavia.common import constants
from octavia.common import exceptions
CONF = cfg.CONF
CONF.import_group('quotas', 'octavia.common.config')
class QuotasController(base.BaseController):
def __init__(self):
super(QuotasController, self).__init__()
@wsme_pecan.wsexpose(quota_types.QuotaResponse, wtypes.text)
def get(self, project_id):
"""Get a single project's quota details."""
context = pecan.request.context.get('octavia_context')
db_quotas = self._get_db_quotas(context.session, project_id)
return self._convert_db_to_type(db_quotas, quota_types.QuotaResponse)
@wsme_pecan.wsexpose(quota_types.QuotaAllResponse)
def get_all(self):
"""List all non-default quotas."""
context = pecan.request.context.get('octavia_context')
db_quotas, _ = self.repositories.quotas.get_all(context.session)
quotas = quota_types.QuotaAllResponse.from_data_model(db_quotas)
return quotas
@wsme_pecan.wsexpose(quota_types.QuotaResponse, wtypes.text,
body=quota_types.QuotaPUT, status_code=202)
def put(self, project_id, quotas):
"""Update any or all quotas for a project."""
context = pecan.request.context.get('octavia_context')
new_project_id = context.project_id
if context.is_admin or (CONF.api_settings.auth_strategy ==
constants.NOAUTH):
if project_id:
new_project_id = project_id
if not new_project_id:
raise exceptions.MissingAPIProjectID()
project_id = new_project_id
quotas_dict = quotas.to_dict()
self.repositories.quotas.update(context.session, project_id,
**quotas_dict)
db_quotas = self._get_db_quotas(context.session, project_id)
return self._convert_db_to_type(db_quotas, quota_types.QuotaResponse)
@wsme_pecan.wsexpose(None, wtypes.text, status_code=202)
def delete(self, project_id):
"""Reset a project's quotas to the default values."""
context = pecan.request.context.get('octavia_context')
project_id = context.project_id or project_id
self.repositories.quotas.delete(context.session, project_id)
db_quotas = self._get_db_quotas(context.session, project_id)
return self._convert_db_to_type(db_quotas, quota_types.QuotaResponse)
@pecan.expose()
def _lookup(self, project_id, *remainder):
"""Overridden pecan _lookup method for routing default endpoint."""
if project_id and remainder and remainder[0] == 'default':
return QuotasDefaultController(project_id), ''
return None
class QuotasDefaultController(base.BaseController):
def __init__(self, project_id):
super(QuotasDefaultController, self).__init__()
self.project_id = project_id
@wsme_pecan.wsexpose(quota_types.QuotaResponse, wtypes.text)
def get(self):
"""Get a project's default quota details."""
context = pecan.request.context.get('octavia_context')
project_id = context.project_id
quotas = self._get_default_quotas(project_id)
return self._convert_db_to_type(quotas, quota_types.QuotaResponse)

@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

@@ -1,66 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import types as wtypes
from octavia.api.common import types as base
from octavia.common import constants
class HealthMonitorResponse(base.BaseType):
"""Defines which attributes are to be shown on any response."""
type = wtypes.wsattr(wtypes.text)
delay = wtypes.wsattr(wtypes.IntegerType())
timeout = wtypes.wsattr(wtypes.IntegerType())
fall_threshold = wtypes.wsattr(wtypes.IntegerType())
rise_threshold = wtypes.wsattr(wtypes.IntegerType())
http_method = wtypes.wsattr(wtypes.text)
url_path = wtypes.wsattr(wtypes.text)
expected_codes = wtypes.wsattr(wtypes.text)
enabled = wtypes.wsattr(bool)
project_id = wtypes.wsattr(wtypes.StringType())
class HealthMonitorPOST(base.BaseType):
"""Defines mandatory and optional attributes of a POST request."""
type = wtypes.wsattr(
wtypes.Enum(str, *constants.SUPPORTED_HEALTH_MONITOR_TYPES),
mandatory=True)
delay = wtypes.wsattr(wtypes.IntegerType(), mandatory=True)
timeout = wtypes.wsattr(wtypes.IntegerType(), mandatory=True)
fall_threshold = wtypes.wsattr(wtypes.IntegerType(), mandatory=True)
rise_threshold = wtypes.wsattr(wtypes.IntegerType(), mandatory=True)
http_method = wtypes.wsattr(
wtypes.text, default=constants.HEALTH_MONITOR_HTTP_DEFAULT_METHOD)
url_path = wtypes.wsattr(
wtypes.text, default=constants.HEALTH_MONITOR_DEFAULT_URL_PATH)
expected_codes = wtypes.wsattr(
wtypes.text, default=constants.HEALTH_MONITOR_DEFAULT_EXPECTED_CODES)
enabled = wtypes.wsattr(bool, default=True)
# TODO(johnsom) Remove after deprecation (R series)
project_id = wtypes.wsattr(wtypes.StringType(max_length=36))
class HealthMonitorPUT(base.BaseType):
    """Defines attributes that are acceptable for a PUT request."""
type = wtypes.wsattr(
wtypes.Enum(str, *constants.SUPPORTED_HEALTH_MONITOR_TYPES))
delay = wtypes.wsattr(wtypes.IntegerType())
timeout = wtypes.wsattr(wtypes.IntegerType())
fall_threshold = wtypes.wsattr(wtypes.IntegerType())
rise_threshold = wtypes.wsattr(wtypes.IntegerType())
http_method = wtypes.wsattr(wtypes.text)
url_path = wtypes.wsattr(wtypes.text)
expected_codes = wtypes.wsattr(wtypes.text)
enabled = wtypes.wsattr(bool)

@@ -1,87 +0,0 @@
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import types as wtypes
from octavia.api.common import types as base
from octavia.api.v1.types import l7rule
from octavia.api.v1.types import pool
from octavia.common import constants
class L7PolicyResponse(base.BaseType):
"""Defines which attributes are to be shown on any response."""
id = wtypes.wsattr(wtypes.UuidType())
name = wtypes.wsattr(wtypes.StringType())
description = wtypes.wsattr(wtypes.StringType())
enabled = wtypes.wsattr(bool)
action = wtypes.wsattr(wtypes.StringType())
redirect_pool_id = wtypes.wsattr(wtypes.UuidType())
redirect_url = wtypes.wsattr(wtypes.StringType())
redirect_prefix = wtypes.wsattr(wtypes.StringType())
position = wtypes.wsattr(wtypes.IntegerType())
l7rules = wtypes.wsattr([l7rule.L7RuleResponse])
redirect_pool = wtypes.wsattr(pool.PoolResponse)
@classmethod
def from_data_model(cls, data_model, children=False):
policy = super(L7PolicyResponse, cls).from_data_model(
data_model, children=children)
if not children:
del policy.l7rules
del policy.redirect_pool
return policy
policy.l7rules = [
l7rule.L7RuleResponse.from_data_model(
l7rule_dm, children=children)
for l7rule_dm in data_model.l7rules
]
if policy.redirect_pool_id:
policy.redirect_pool = pool.PoolResponse.from_data_model(
data_model.redirect_pool, children=children)
else:
del policy.redirect_pool
del policy.redirect_pool_id
return policy
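The `from_data_model` override above always copies scalar fields, attaches child objects only when `children=True`, and deletes empty child attributes so they never appear in the response body. A minimal stand-in illustrating that pruning pattern (hypothetical `PolicyView` class, no wsme dependency):

```python
class PolicyView:
    """Hypothetical stand-in for a wsme response type (plain attributes)."""


def render_policy(data, children=False):
    # Scalar fields are always copied; child collections are attached
    # only when requested, and unset child attributes are deleted so
    # they are omitted from the serialized response.
    view = PolicyView()
    view.id = data['id']
    view.redirect_pool_id = data.get('redirect_pool_id')
    if not children:
        return view
    view.l7rules = list(data.get('l7rules', []))
    if data.get('redirect_pool_id'):
        view.redirect_pool = {'id': data['redirect_pool_id']}
    else:
        del view.redirect_pool_id
    return view
```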
class L7PolicyPOST(base.BaseType):
"""Defines mandatory and optional attributes of a POST request."""
id = wtypes.wsattr(wtypes.UuidType())
name = wtypes.wsattr(wtypes.StringType(max_length=255))
description = wtypes.wsattr(wtypes.StringType(max_length=255))
enabled = wtypes.wsattr(bool, default=True)
action = wtypes.wsattr(
wtypes.Enum(str, *constants.SUPPORTED_L7POLICY_ACTIONS),
mandatory=True)
redirect_pool_id = wtypes.wsattr(wtypes.UuidType())
redirect_url = wtypes.wsattr(base.URLType())
redirect_prefix = wtypes.wsattr(base.URLType())
position = wtypes.wsattr(wtypes.IntegerType(),
default=constants.MAX_POLICY_POSITION)
redirect_pool = wtypes.wsattr(pool.PoolPOST)
l7rules = wtypes.wsattr([l7rule.L7RulePOST], default=[])
class L7PolicyPUT(base.BaseType):
    """Defines attributes that are acceptable for a PUT request."""
name = wtypes.wsattr(wtypes.StringType(max_length=255))
description = wtypes.wsattr(wtypes.StringType(max_length=255))
enabled = wtypes.wsattr(bool)
action = wtypes.wsattr(
wtypes.Enum(str, *constants.SUPPORTED_L7POLICY_ACTIONS))
redirect_pool_id = wtypes.wsattr(wtypes.UuidType())
redirect_url = wtypes.wsattr(base.URLType())
redirect_prefix = wtypes.wsattr(base.URLType())
position = wtypes.wsattr(wtypes.IntegerType())

@@ -1,57 +0,0 @@
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import types as wtypes
from octavia.api.common import types as base
from octavia.common import constants
class L7RuleResponse(base.BaseType):
"""Defines which attributes are to be shown on any response."""
id = wtypes.wsattr(wtypes.UuidType())
type = wtypes.wsattr(wtypes.StringType())
compare_type = wtypes.wsattr(wtypes.StringType())
key = wtypes.wsattr(wtypes.StringType())
value = wtypes.wsattr(wtypes.StringType())
invert = wtypes.wsattr(bool)
class L7RulePOST(base.BaseType):
"""Defines mandatory and optional attributes of a POST request."""
id = wtypes.wsattr(wtypes.UuidType())
type = wtypes.wsattr(
wtypes.Enum(str,
*constants.SUPPORTED_L7RULE_TYPES),
mandatory=True)
compare_type = wtypes.wsattr(
wtypes.Enum(str,
*constants.SUPPORTED_L7RULE_COMPARE_TYPES),
mandatory=True)
key = wtypes.wsattr(wtypes.StringType(max_length=255))
value = wtypes.wsattr(wtypes.StringType(max_length=255), mandatory=True)
invert = wtypes.wsattr(bool, default=False)
class L7RulePUT(base.BaseType):
    """Defines attributes that are acceptable for a PUT request."""
type = wtypes.wsattr(
wtypes.Enum(str,
*constants.SUPPORTED_L7RULE_TYPES))
compare_type = wtypes.wsattr(
wtypes.Enum(str,
*constants.SUPPORTED_L7RULE_COMPARE_TYPES))
key = wtypes.wsattr(wtypes.StringType(max_length=255))
value = wtypes.wsattr(wtypes.StringType(max_length=255))
invert = wtypes.wsattr(bool)

@@ -1,112 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import types as wtypes
from octavia.api.common import types as base
from octavia.api.v1.types import l7policy
from octavia.api.v1.types import pool
from octavia.common import constants
class TLSTermination(base.BaseType):
certificate = wtypes.wsattr(wtypes.StringType())
intermediate_certificate = wtypes.wsattr(wtypes.StringType())
private_key = wtypes.wsattr(wtypes.StringType())
passphrase = wtypes.wsattr(wtypes.StringType())
class ListenerResponse(base.BaseType):
"""Defines which attributes are to be shown on any response."""
id = wtypes.wsattr(wtypes.UuidType())
name = wtypes.wsattr(wtypes.StringType())
description = wtypes.wsattr(wtypes.StringType())
provisioning_status = wtypes.wsattr(wtypes.StringType())
operating_status = wtypes.wsattr(wtypes.StringType())
enabled = wtypes.wsattr(bool)
protocol = wtypes.wsattr(wtypes.text)
protocol_port = wtypes.wsattr(wtypes.IntegerType())
connection_limit = wtypes.wsattr(wtypes.IntegerType())
tls_certificate_id = wtypes.wsattr(wtypes.StringType(max_length=255))
sni_containers = [wtypes.StringType(max_length=255)]
project_id = wtypes.wsattr(wtypes.StringType())
default_pool_id = wtypes.wsattr(wtypes.UuidType())
default_pool = wtypes.wsattr(pool.PoolResponse)
l7policies = wtypes.wsattr([l7policy.L7PolicyResponse])
insert_headers = wtypes.wsattr(wtypes.DictType(str, str))
created_at = wtypes.wsattr(wtypes.datetime.datetime)
updated_at = wtypes.wsattr(wtypes.datetime.datetime)
@classmethod
def from_data_model(cls, data_model, children=False):
listener = super(ListenerResponse, cls).from_data_model(
data_model, children=children)
# NOTE(blogan): we should show sni_containers for every call to show
# a listener
listener.sni_containers = [sni_c.tls_container_id
for sni_c in data_model.sni_containers]
if not children:
# NOTE(blogan): do not show default_pool if the request does not
# want to see children
del listener.default_pool
del listener.l7policies
return listener
if data_model.default_pool:
listener.default_pool = pool.PoolResponse.from_data_model(
data_model.default_pool, children=children)
if data_model.l7policies:
listener.l7policies = [l7policy.L7PolicyResponse.from_data_model(
policy, children=children) for policy in data_model.l7policies]
if not listener.default_pool:
del listener.default_pool
del listener.default_pool_id
if not listener.l7policies or len(listener.l7policies) <= 0:
del listener.l7policies
return listener
class ListenerPOST(base.BaseType):
"""Defines mandatory and optional attributes of a POST request."""
id = wtypes.wsattr(wtypes.UuidType())
name = wtypes.wsattr(wtypes.StringType(max_length=255))
description = wtypes.wsattr(wtypes.StringType(max_length=255))
enabled = wtypes.wsattr(bool, default=True)
protocol = wtypes.wsattr(wtypes.Enum(str, *constants.SUPPORTED_PROTOCOLS),
mandatory=True)
protocol_port = wtypes.wsattr(wtypes.IntegerType(), mandatory=True)
connection_limit = wtypes.wsattr(wtypes.IntegerType())
tls_certificate_id = wtypes.wsattr(wtypes.StringType(max_length=255))
tls_termination = wtypes.wsattr(TLSTermination)
sni_containers = [wtypes.StringType(max_length=255)]
# TODO(johnsom) Remove after deprecation (R series)
project_id = wtypes.wsattr(wtypes.StringType(max_length=36))
default_pool_id = wtypes.wsattr(wtypes.UuidType())
default_pool = wtypes.wsattr(pool.PoolPOST)
l7policies = wtypes.wsattr([l7policy.L7PolicyPOST], default=[])
insert_headers = wtypes.wsattr(wtypes.DictType(str, str))
class ListenerPUT(base.BaseType):
    """Defines attributes that are acceptable for a PUT request."""
name = wtypes.wsattr(wtypes.StringType(max_length=255))
description = wtypes.wsattr(wtypes.StringType(max_length=255))
enabled = wtypes.wsattr(bool)
protocol = wtypes.wsattr(wtypes.Enum(str, *constants.SUPPORTED_PROTOCOLS))
protocol_port = wtypes.wsattr(wtypes.IntegerType())
connection_limit = wtypes.wsattr(wtypes.IntegerType())
tls_certificate_id = wtypes.wsattr(wtypes.StringType(max_length=255))
tls_termination = wtypes.wsattr(TLSTermination)
sni_containers = [wtypes.StringType(max_length=255)]
default_pool_id = wtypes.wsattr(wtypes.UuidType())
insert_headers = wtypes.wsattr(wtypes.DictType(str, str))

@@ -1,25 +0,0 @@
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import types as wtypes
from octavia.api.common import types as base
class ListenerStatisticsResponse(base.BaseType):
bytes_in = wtypes.wsattr(wtypes.IntegerType())
bytes_out = wtypes.wsattr(wtypes.IntegerType())
active_connections = wtypes.wsattr(wtypes.IntegerType())
total_connections = wtypes.wsattr(wtypes.IntegerType())
request_errors = wtypes.wsattr(wtypes.IntegerType())

@@ -1,78 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import types as wtypes
from octavia.api.common import types as base
from octavia.api.v1.types import listener
class VIP(base.BaseType):
"""Defines the response and acceptable POST request attributes."""
ip_address = wtypes.wsattr(base.IPAddressType())
port_id = wtypes.wsattr(wtypes.UuidType())
subnet_id = wtypes.wsattr(wtypes.UuidType())
network_id = wtypes.wsattr(wtypes.UuidType())
class LoadBalancerResponse(base.BaseType):
"""Defines which attributes are to be shown on any response."""
id = wtypes.wsattr(wtypes.UuidType())
name = wtypes.wsattr(wtypes.StringType())
description = wtypes.wsattr(wtypes.StringType())
provisioning_status = wtypes.wsattr(wtypes.StringType())
operating_status = wtypes.wsattr(wtypes.StringType())
enabled = wtypes.wsattr(bool)
vip = wtypes.wsattr(VIP)
project_id = wtypes.wsattr(wtypes.StringType())
listeners = wtypes.wsattr([listener.ListenerResponse])
created_at = wtypes.wsattr(wtypes.datetime.datetime)
updated_at = wtypes.wsattr(wtypes.datetime.datetime)
@classmethod
def from_data_model(cls, data_model, children=False):
lb = super(LoadBalancerResponse, cls).from_data_model(
data_model, children=children)
        # NOTE(blogan): VIP is technically a child but it's the main piece of
# a load balancer so it makes sense to show it no matter what.
lb.vip = VIP.from_data_model(data_model.vip)
if not children:
# NOTE(blogan): don't show listeners if the request does not want
# to see children
del lb.listeners
return lb
lb.listeners = [
listener.ListenerResponse.from_data_model(
listener_dm, children=children)
for listener_dm in data_model.listeners
]
return lb
class LoadBalancerPOST(base.BaseType):
"""Defines mandatory and optional attributes of a POST request."""
id = wtypes.wsattr(wtypes.UuidType())
name = wtypes.wsattr(wtypes.StringType(max_length=255))
description = wtypes.wsattr(wtypes.StringType(max_length=255))
enabled = wtypes.wsattr(bool, default=True)
vip = wtypes.wsattr(VIP, mandatory=True)
project_id = wtypes.wsattr(wtypes.StringType(max_length=36))
listeners = wtypes.wsattr([listener.ListenerPOST], default=[])
class LoadBalancerPUT(base.BaseType):
    """Defines attributes that are acceptable for a PUT request."""
name = wtypes.wsattr(wtypes.StringType(max_length=255))
description = wtypes.wsattr(wtypes.StringType(max_length=255))
enabled = wtypes.wsattr(bool)

@@ -1,50 +0,0 @@
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import types as wtypes
from octavia.api.common import types as base
from octavia.api.v1.types import listener_statistics
class ListenerStatistics(listener_statistics.ListenerStatisticsResponse):
id = wtypes.wsattr(wtypes.UuidType())
@classmethod
def from_data_model(cls, data_model, children=False):
ls_stats = super(ListenerStatistics, cls).from_data_model(
data_model, children=children)
ls_stats.id = data_model.listener_id
return ls_stats
class LoadBalancerStatisticsResponse(base.BaseType):
bytes_in = wtypes.wsattr(wtypes.IntegerType())
bytes_out = wtypes.wsattr(wtypes.IntegerType())
active_connections = wtypes.wsattr(wtypes.IntegerType())
total_connections = wtypes.wsattr(wtypes.IntegerType())
request_errors = wtypes.wsattr(wtypes.IntegerType())
listeners = wtypes.wsattr([ListenerStatistics])
@classmethod
def from_data_model(cls, data_model, children=False):
lb_stats = super(LoadBalancerStatisticsResponse, cls).from_data_model(
data_model, children=children)
lb_stats.listeners = [
ListenerStatistics.from_data_model(
listener_dm, children=children)
for listener_dm in data_model.listeners
]
return lb_stats

@@ -1,60 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import types as wtypes
from octavia.api.common import types as base
from octavia.common import constants
class MemberResponse(base.BaseType):
"""Defines which attributes are to be shown on any response."""
id = wtypes.wsattr(wtypes.UuidType())
operating_status = wtypes.wsattr(wtypes.StringType())
enabled = wtypes.wsattr(bool)
ip_address = wtypes.wsattr(base.IPAddressType())
protocol_port = wtypes.wsattr(wtypes.IntegerType())
weight = wtypes.wsattr(wtypes.IntegerType())
subnet_id = wtypes.wsattr(wtypes.UuidType())
project_id = wtypes.wsattr(wtypes.StringType())
created_at = wtypes.wsattr(wtypes.datetime.datetime)
updated_at = wtypes.wsattr(wtypes.datetime.datetime)
monitor_address = wtypes.wsattr(base.IPAddressType())
monitor_port = wtypes.wsattr(wtypes.IntegerType())
class MemberPOST(base.BaseType):
"""Defines mandatory and optional attributes of a POST request."""
id = wtypes.wsattr(wtypes.UuidType())
enabled = wtypes.wsattr(bool, default=True)
ip_address = wtypes.wsattr(base.IPAddressType(), mandatory=True)
protocol_port = wtypes.wsattr(wtypes.IntegerType(), mandatory=True)
weight = wtypes.wsattr(wtypes.IntegerType(), default=1)
subnet_id = wtypes.wsattr(wtypes.UuidType())
# TODO(johnsom) Remove after deprecation (R series)
project_id = wtypes.wsattr(wtypes.StringType(max_length=36))
monitor_port = wtypes.wsattr(wtypes.IntegerType(
minimum=constants.MIN_PORT_NUMBER, maximum=constants.MAX_PORT_NUMBER),
default=None)
monitor_address = wtypes.wsattr(base.IPAddressType(), default=None)
class MemberPUT(base.BaseType):
    """Defines attributes that are acceptable for a PUT request."""
protocol_port = wtypes.wsattr(wtypes.IntegerType())
enabled = wtypes.wsattr(bool)
weight = wtypes.wsattr(wtypes.IntegerType())
monitor_address = wtypes.wsattr(base.IPAddressType())
monitor_port = wtypes.wsattr(wtypes.IntegerType(
minimum=constants.MIN_PORT_NUMBER, maximum=constants.MAX_PORT_NUMBER))

@@ -1,114 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import types as wtypes
from octavia.api.common import types as base
from octavia.api.v1.types import health_monitor
from octavia.api.v1.types import member
from octavia.common import constants
class SessionPersistenceResponse(base.BaseType):
"""Defines which attributes are to be shown on any response."""
type = wtypes.wsattr(wtypes.text)
cookie_name = wtypes.wsattr(wtypes.text)
class SessionPersistencePOST(base.BaseType):
"""Defines mandatory and optional attributes of a POST request."""
type = wtypes.wsattr(wtypes.Enum(str, *constants.SUPPORTED_SP_TYPES),
mandatory=True)
cookie_name = wtypes.wsattr(wtypes.text)
class SessionPersistencePUT(base.BaseType):
    """Defines attributes that are acceptable for a PUT request."""
type = wtypes.wsattr(wtypes.Enum(str, *constants.SUPPORTED_SP_TYPES))
cookie_name = wtypes.wsattr(wtypes.text, default=None)
class PoolResponse(base.BaseType):
"""Defines which attributes are to be shown on any response."""
id = wtypes.wsattr(wtypes.UuidType())
name = wtypes.wsattr(wtypes.StringType())
description = wtypes.wsattr(wtypes.StringType())
operating_status = wtypes.wsattr(wtypes.StringType())
enabled = wtypes.wsattr(bool)
protocol = wtypes.wsattr(wtypes.text)
lb_algorithm = wtypes.wsattr(wtypes.text)
session_persistence = wtypes.wsattr(SessionPersistenceResponse)
project_id = wtypes.wsattr(wtypes.StringType())
health_monitor = wtypes.wsattr(health_monitor.HealthMonitorResponse)
members = wtypes.wsattr([member.MemberResponse])
created_at = wtypes.wsattr(wtypes.datetime.datetime)
updated_at = wtypes.wsattr(wtypes.datetime.datetime)
@classmethod
def from_data_model(cls, data_model, children=False):
pool = super(PoolResponse, cls).from_data_model(
data_model, children=children)
# NOTE(blogan): we should show session persistence on every request
# to show a pool
if data_model.session_persistence:
pool.session_persistence = (
SessionPersistenceResponse.from_data_model(
data_model.session_persistence))
if not children:
# NOTE(blogan): do not show members or health_monitor if the
# request does not want to see children
del pool.members
del pool.health_monitor
return pool
pool.members = [
member.MemberResponse.from_data_model(member_dm, children=children)
for member_dm in data_model.members
]
if data_model.health_monitor:
pool.health_monitor = (
health_monitor.HealthMonitorResponse.from_data_model(
data_model.health_monitor, children=children))
if not pool.health_monitor:
del pool.health_monitor
return pool
class PoolPOST(base.BaseType):
"""Defines mandatory and optional attributes of a POST request."""
id = wtypes.wsattr(wtypes.UuidType())
name = wtypes.wsattr(wtypes.StringType(max_length=255))
description = wtypes.wsattr(wtypes.StringType(max_length=255))
enabled = wtypes.wsattr(bool, default=True)
listener_id = wtypes.wsattr(wtypes.UuidType())
protocol = wtypes.wsattr(wtypes.Enum(str, *constants.SUPPORTED_PROTOCOLS),
mandatory=True)
lb_algorithm = wtypes.wsattr(
wtypes.Enum(str, *constants.SUPPORTED_LB_ALGORITHMS),
mandatory=True)
session_persistence = wtypes.wsattr(SessionPersistencePOST)
# TODO(johnsom) Remove after deprecation (R series)
project_id = wtypes.wsattr(wtypes.StringType(max_length=36))
health_monitor = wtypes.wsattr(health_monitor.HealthMonitorPOST)
members = wtypes.wsattr([member.MemberPOST])
class PoolPUT(base.BaseType):
    """Defines attributes that are acceptable for a PUT request."""
name = wtypes.wsattr(wtypes.StringType())
description = wtypes.wsattr(wtypes.StringType())
enabled = wtypes.wsattr(bool)
protocol = wtypes.wsattr(wtypes.Enum(str, *constants.SUPPORTED_PROTOCOLS))
lb_algorithm = wtypes.wsattr(
wtypes.Enum(str, *constants.SUPPORTED_LB_ALGORITHMS))
session_persistence = wtypes.wsattr(SessionPersistencePUT)

@@ -1,73 +0,0 @@
# Copyright 2016 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import types as wtypes
from octavia.api.common import types as base
class QuotaBase(base.BaseType):
"""Individual quota definitions."""
load_balancer = wtypes.wsattr(wtypes.IntegerType())
listener = wtypes.wsattr(wtypes.IntegerType())
member = wtypes.wsattr(wtypes.IntegerType())
pool = wtypes.wsattr(wtypes.IntegerType())
health_monitor = wtypes.wsattr(wtypes.IntegerType())
class QuotaResponse(base.BaseType):
"""Wrapper object for quotas responses."""
quota = wtypes.wsattr(QuotaBase)
@classmethod
def from_data_model(cls, data_model, children=False):
quotas = super(QuotaResponse, cls).from_data_model(
data_model, children=children)
quotas.quota = QuotaBase.from_data_model(data_model)
return quotas
class QuotaAllBase(base.BaseType):
"""Wrapper object for get all quotas responses."""
project_id = wtypes.wsattr(wtypes.StringType())
tenant_id = wtypes.wsattr(wtypes.StringType())
load_balancer = wtypes.wsattr(wtypes.IntegerType())
listener = wtypes.wsattr(wtypes.IntegerType())
member = wtypes.wsattr(wtypes.IntegerType())
pool = wtypes.wsattr(wtypes.IntegerType())
health_monitor = wtypes.wsattr(wtypes.IntegerType())
@classmethod
def from_data_model(cls, data_model, children=False):
quotas = super(QuotaAllBase, cls).from_data_model(
data_model, children=children)
quotas.tenant_id = quotas.project_id
return quotas
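`QuotaAllBase.from_data_model` above mirrors the Keystone `project_id` into a `tenant_id` attribute for backward compatibility with clients that still expect the older field name. The aliasing step can be sketched with a hypothetical dict-based stand-in:

```python
def with_tenant_alias(quota):
    # Clients from the tenant-era API expect 'tenant_id'; mirror the
    # Keystone 'project_id' into it without dropping any other field.
    out = dict(quota)
    out['tenant_id'] = out.get('project_id')
    return out
```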
class QuotaAllResponse(base.BaseType):
quotas = wtypes.wsattr([QuotaAllBase])
@classmethod
def from_data_model(cls, data_model, children=False):
quotalist = QuotaAllResponse()
quotalist.quotas = [
QuotaAllBase.from_data_model(obj)
for obj in data_model]
return quotalist
class QuotaPUT(base.BaseType):
"""Overall object for quota PUT request."""
quota = wtypes.wsattr(QuotaBase)

@@ -52,8 +52,6 @@ api_opts = [
constants.KEYSTONE,
constants.TESTING],
help=_("The auth strategy for API requests.")),
cfg.StrOpt('api_handler', default='queue_producer',
help=_("The handler that the API communicates with")),
cfg.BoolOpt('allow_pagination', default=True,
help=_("Allow the usage of pagination")),
cfg.BoolOpt('allow_sorting', default=True,
@@ -71,10 +69,6 @@ api_opts = [
help=_("Base URI for the API for use in pagination links. "
"This will be autodetected from the request if not "
"overridden here.")),
cfg.BoolOpt('api_v1_enabled', default=True,
help=_("Expose the v1 API?")),
cfg.BoolOpt('api_v2_enabled', default=True,
help=_("Expose the v2 API?")),
cfg.BoolOpt('allow_tls_terminated_listeners', default=True,
help=_("Allow users to create TLS Terminated listeners?")),
cfg.BoolOpt('allow_ping_health_monitors', default=True,
@@ -206,27 +200,10 @@ healthmanager_opts = [
help=_('Driver for updating amphora health system.')),
cfg.StrOpt('stats_update_driver', default='stats_db',
help=_('Driver for updating amphora statistics.')),
# Used for synchronizing neutron-lbaas and octavia
cfg.StrOpt('event_streamer_driver',
help=_('Specifies which driver to use for the event_streamer '
'for syncing the octavia and neutron_lbaas dbs. If you '
'don\'t need to sync the database or are running '
'octavia in stand alone mode use the '
'noop_event_streamer'),
default='noop_event_streamer'),
cfg.BoolOpt('sync_provisioning_status', default=False,
help=_("Enable provisioning status sync with neutron db"))]
]
oslo_messaging_opts = [
cfg.StrOpt('topic'),
cfg.StrOpt('event_stream_topic',
default='neutron_lbaas_event',
help=_('topic name for communicating events through a queue')),
cfg.StrOpt('event_stream_transport_url', default=None,
help=_('Transport URL to use for the neutron-lbaas '
'synchronization event stream when neutron and octavia '
'have separate queues.')),
]
haproxy_amphora_opts = [

@@ -523,8 +523,6 @@ MAX_QUOTA = 2000000000
API_VERSION = '0.5'
NOOP_EVENT_STREAMER = 'noop_event_streamer'
HAPROXY_BASE_PEER_PORT = 1025
KEEPALIVED_JINJA2_UPSTART = 'keepalived.upstart.j2'
KEEPALIVED_JINJA2_SYSTEMD = 'keepalived.systemd.j2'

@@ -25,7 +25,6 @@ from stevedore import driver as stevedore_driver
from octavia.common import constants
from octavia.common import stats
from octavia.controller.healthmanager.health_drivers import update_base
from octavia.controller.healthmanager import update_serializer
from octavia.db import api as db_api
from octavia.db import repositories as repo
@@ -37,10 +36,6 @@ class UpdateHealthDb(update_base.HealthUpdateBase):
def __init__(self):
super(UpdateHealthDb, self).__init__()
# first setup repo for amphora, listener,member(nodes),pool repo
self.event_streamer = stevedore_driver.DriverManager(
namespace='octavia.controller.queues',
name=CONF.health_manager.event_streamer_driver,
invoke_on_load=True).driver
self.amphora_repo = repo.AmphoraRepository()
self.amphora_health_repo = repo.AmphoraHealthRepository()
self.listener_repo = repo.ListenerRepository()
@@ -48,12 +43,8 @@ class UpdateHealthDb(update_base.HealthUpdateBase):
self.member_repo = repo.MemberRepository()
self.pool_repo = repo.PoolRepository()
def emit(self, info_type, info_id, info_obj):
cnt = update_serializer.InfoContainer(info_type, info_id, info_obj)
self.event_streamer.emit(cnt)
def _update_status_and_emit_event(self, session, repo, entity_type,
entity_id, new_op_status, old_op_status):
def _update_status(self, session, repo, entity_type,
entity_id, new_op_status, old_op_status):
message = {}
if old_op_status.lower() != new_op_status.lower():
LOG.debug("%s %s status has changed from %s to "
@@ -61,22 +52,10 @@ class UpdateHealthDb(update_base.HealthUpdateBase):
entity_type, entity_id, old_op_status,
new_op_status)
repo.update(session, entity_id, operating_status=new_op_status)
# Map the status for neutron-lbaas
# Map the status for neutron-lbaas compatibility
if new_op_status == constants.DRAINING:
new_op_status = constants.ONLINE
message.update({constants.OPERATING_STATUS: new_op_status})
if (CONF.health_manager.event_streamer_driver !=
constants.NOOP_EVENT_STREAMER):
if CONF.health_manager.sync_provisioning_status:
current_prov_status = repo.get(
session, id=entity_id).provisioning_status
LOG.debug("%s %s provisioning_status %s. "
"Sending event.",
entity_type, entity_id, current_prov_status)
message.update(
{constants.PROVISIONING_STATUS: current_prov_status})
if message:
self.emit(entity_type, entity_id, message)
def update_health(self, health, srcaddr):
# The executor will eat any exceptions from the update_health code
@@ -277,7 +256,7 @@ class UpdateHealthDb(update_base.HealthUpdateBase):
try:
if (listener_status is not None and
listener_status != db_op_status):
self._update_status_and_emit_event(
self._update_status(
session, self.listener_repo, constants.LISTENER,
listener_id, listener_status, db_op_status)
except sqlalchemy.orm.exc.NoResultFound:
@@ -305,7 +284,7 @@ class UpdateHealthDb(update_base.HealthUpdateBase):
try:
# If the database doesn't already show the pool offline, update
if potential_offline_pools[pool_id] != constants.OFFLINE:
self._update_status_and_emit_event(
self._update_status(
session, self.pool_repo, constants.POOL,
pool_id, constants.OFFLINE,
potential_offline_pools[pool_id])
@@ -315,7 +294,7 @@ class UpdateHealthDb(update_base.HealthUpdateBase):
# Update the load balancer status last
try:
if lb_status != db_lb['operating_status']:
self._update_status_and_emit_event(
self._update_status(
session, self.loadbalancer_repo,
constants.LOADBALANCER, db_lb['id'], lb_status,
db_lb[constants.OPERATING_STATUS])
@@ -393,7 +372,7 @@ class UpdateHealthDb(update_base.HealthUpdateBase):
try:
if (member_status is not None and
member_status != member_db_status):
self._update_status_and_emit_event(
self._update_status(
session, self.member_repo, constants.MEMBER,
member_id, member_status, member_db_status)
except sqlalchemy.orm.exc.NoResultFound:
@@ -403,7 +382,7 @@ class UpdateHealthDb(update_base.HealthUpdateBase):
try:
if (pool_status is not None and
pool_status != db_pool_dict['operating_status']):
self._update_status_and_emit_event(
self._update_status(
session, self.pool_repo, constants.POOL,
pool_id, pool_status, db_pool_dict['operating_status'])
except sqlalchemy.orm.exc.NoResultFound:
@@ -416,16 +395,8 @@ class UpdateStatsDb(update_base.StatsUpdateBase, stats.StatsMixin):
def __init__(self):
super(UpdateStatsDb, self).__init__()
self.event_streamer = stevedore_driver.DriverManager(
namespace='octavia.controller.queues',
name=CONF.health_manager.event_streamer_driver,
invoke_on_load=True).driver
self.repo_listener = repo.ListenerRepository()
def emit(self, info_type, info_id, info_obj):
cnt = update_serializer.InfoContainer(info_type, info_id, info_obj)
self.event_streamer.emit(cnt)
def update_stats(self, health_message, srcaddr):
# The executor will eat any exceptions from the update_stats code
# so we need to wrap it and log the unhandled exception
@@ -484,21 +455,3 @@ class UpdateStatsDb(update_base.StatsUpdateBase, stats.StatsMixin):
listener_id, amphora_id, stats)
self.listener_stats_repo.replace(
session, listener_id, amphora_id, **stats)
if (CONF.health_manager.event_streamer_driver !=
constants.NOOP_EVENT_STREAMER):
listener_stats = self.get_listener_stats(session, listener_id)
self.emit(
'listener_stats', listener_id, listener_stats.get_stats())
listener_db = self.repo_listener.get(session, id=listener_id)
if not listener_db:
LOG.debug('Received health stats for a non-existent '
'listener %s for amphora %s with IP '
'%s.', listener_id, amphora_id, srcaddr)
return
lb_stats = self.get_loadbalancer_stats(
session, listener_db.load_balancer_id)
self.emit('loadbalancer_stats',
listener_db.load_balancer_id, lb_stats.get_stats())

View File

@@ -1,47 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
class InfoContainer(object):
@staticmethod
def from_dict(dict_obj):
return InfoContainer(dict_obj['info_type'],
dict_obj['info_id'],
dict_obj['info_payload'])
def __init__(self, info_type, info_id, info_payload):
self.info_type = copy.copy(info_type)
self.info_id = copy.copy(info_id)
self.info_payload = copy.deepcopy(info_payload)
def to_dict(self):
return {'info_type': self.info_type,
'info_id': self.info_id,
'info_payload': self.info_payload}
def __eq__(self, other):
if not isinstance(other, InfoContainer):
return False
if self.info_type != other.info_type:
return False
if self.info_id != other.info_id:
return False
if self.info_payload != other.info_payload:
return False
return True
def __ne__(self, other):
return not self == other

View File

@@ -1,76 +0,0 @@
# Copyright 2015 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
import six
LOG = logging.getLogger(__name__)
@six.add_metaclass(abc.ABCMeta)
class EventStreamerBase(object):
"""Base class for EventStreamer
A stand in abstract class that defines what methods are stevedore loaded
implementations of event streamer is expected to provide.
"""
@abc.abstractmethod
def emit(self, cnt):
"""method to send a DB event to neutron-lbaas if it is needed.
:param cnt: an InfoContainer container object
:return: None
"""
class EventStreamerNoop(EventStreamerBase):
"""Nop class implementation of EventStreamer
Useful if you're running in standalone mode and don't need to send
updates to Neutron LBaaS
"""
def emit(self, cnt):
pass
class EventStreamerNeutron(EventStreamerBase):
"""Neutron LBaaS
When you're using Octavia alongside neutron LBaaS this class provides
a mechanism to send updates to neutron LBaaS database via
oslo_messaging queues.
"""
def __init__(self):
topic = cfg.CONF.oslo_messaging.event_stream_topic
if cfg.CONF.oslo_messaging.event_stream_transport_url:
# Use custom URL
self.transport = oslo_messaging.get_rpc_transport(
cfg.CONF, cfg.CONF.oslo_messaging.event_stream_transport_url)
else:
self.transport = oslo_messaging.get_rpc_transport(cfg.CONF)
self.target = oslo_messaging.Target(topic=topic, exchange="common",
namespace='control', fanout=False,
version='1.0')
self.client = oslo_messaging.RPCClient(self.transport, self.target)
def emit(self, cnt):
LOG.debug("Emitting data to event streamer %s", cnt.to_dict())
self.client.cast({}, 'update_info', container=cnt.to_dict())

View File

@@ -15,7 +15,7 @@
from oslo_config import cfg
from oslo_utils import uuidutils
from octavia.api.v1.types import l7rule
from octavia.api.v2.types import l7rule
from octavia.common import constants
from octavia.common import exceptions
from octavia.common import validate

View File

@@ -35,19 +35,15 @@ class TestRootController(base_db_test.OctaviaDBTestBase):
self.conf = self.useFixture(oslo_fixture.Config(cfg.CONF))
self.conf.config(group='api_settings', auth_strategy=constants.NOAUTH)
def _get_versions_with_config(self, api_v1_enabled, api_v2_enabled):
self.conf.config(group='api_settings', api_v1_enabled=api_v1_enabled)
self.conf.config(group='api_settings', api_v2_enabled=api_v2_enabled)
def _get_versions_with_config(self):
app = pecan.testing.load_test_app({'app': pconfig.app,
'wsme': pconfig.wsme})
return self.get(app=app, path='/').json.get('versions', None)
def test_api_versions(self):
versions = self._get_versions_with_config(
api_v1_enabled=True, api_v2_enabled=True)
versions = self._get_versions_with_config()
version_ids = tuple(v.get('id') for v in versions)
self.assertEqual(12, len(version_ids))
self.assertIn('v1', version_ids)
self.assertEqual(11, len(version_ids))
self.assertIn('v2.0', version_ids)
self.assertIn('v2.1', version_ids)
self.assertIn('v2.2', version_ids)
@@ -63,41 +59,9 @@ class TestRootController(base_db_test.OctaviaDBTestBase):
# Each version should have a 'self' 'href' to the API version URL
# [{u'rel': u'self', u'href': u'http://localhost/v2'}]
# Validate that the URL exists in the response
version_url = 'http://localhost/v2'
for version in versions:
url_version = None
if version['id'].startswith('v2.'):
url_version = 'v2'
else:
url_version = version['id']
version_url = 'http://localhost/{}'.format(url_version)
links = version['links']
# Note, there may be other links present, this test is for 'self'
version_link = [link for link in links if link['rel'] == 'self']
self.assertEqual(version_url, version_link[0]['href'])
def test_api_v1_disabled(self):
versions = self._get_versions_with_config(
api_v1_enabled=False, api_v2_enabled=True)
self.assertEqual(11, len(versions))
self.assertEqual('v2.0', versions[0].get('id'))
self.assertEqual('v2.1', versions[1].get('id'))
self.assertEqual('v2.2', versions[2].get('id'))
self.assertEqual('v2.3', versions[3].get('id'))
self.assertEqual('v2.4', versions[4].get('id'))
self.assertEqual('v2.5', versions[5].get('id'))
self.assertEqual('v2.6', versions[6].get('id'))
self.assertEqual('v2.7', versions[7].get('id'))
self.assertEqual('v2.8', versions[8].get('id'))
self.assertEqual('v2.9', versions[9].get('id'))
self.assertEqual('v2.10', versions[10].get('id'))
def test_api_v2_disabled(self):
versions = self._get_versions_with_config(
api_v1_enabled=True, api_v2_enabled=False)
self.assertEqual(1, len(versions))
self.assertEqual('v1', versions[0].get('id'))
def test_api_both_disabled(self):
versions = self._get_versions_with_config(
api_v1_enabled=False, api_v2_enabled=False)
self.assertEqual(0, len(versions))

View File

@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

View File

@@ -1,320 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_config import cfg
from oslo_config import fixture as oslo_fixture
from oslo_utils import uuidutils
import pecan
import pecan.testing
from octavia.api import config as pconfig
# needed for tests to function when run independently:
from octavia.common import config # noqa: F401
from octavia.common import constants
from octavia.db import api as db_api
from octavia.db import repositories
from octavia.tests.functional.db import base as base_db_test
class BaseAPITest(base_db_test.OctaviaDBTestBase):
BASE_PATH = '/v1'
QUOTAS_PATH = '/quotas'
QUOTA_PATH = QUOTAS_PATH + '/{project_id}'
QUOTA_DEFAULT_PATH = QUOTAS_PATH + '/{project_id}/default'
LBS_PATH = '/loadbalancers'
LB_PATH = LBS_PATH + '/{lb_id}'
LB_DELETE_CASCADE_PATH = LB_PATH + '/delete_cascade'
LB_STATS_PATH = LB_PATH + '/stats'
LISTENERS_PATH = LB_PATH + '/listeners'
LISTENER_PATH = LISTENERS_PATH + '/{listener_id}'
LISTENER_STATS_PATH = LISTENER_PATH + '/stats'
POOLS_PATH = LB_PATH + '/pools'
POOL_PATH = POOLS_PATH + '/{pool_id}'
DEPRECATED_POOLS_PATH = LISTENER_PATH + '/pools'
DEPRECATED_POOL_PATH = DEPRECATED_POOLS_PATH + '/{pool_id}'
MEMBERS_PATH = POOL_PATH + '/members'
MEMBER_PATH = MEMBERS_PATH + '/{member_id}'
DEPRECATED_MEMBERS_PATH = DEPRECATED_POOL_PATH + '/members'
DEPRECATED_MEMBER_PATH = DEPRECATED_MEMBERS_PATH + '/{member_id}'
HM_PATH = POOL_PATH + '/healthmonitor'
DEPRECATED_HM_PATH = DEPRECATED_POOL_PATH + '/healthmonitor'
L7POLICIES_PATH = LISTENER_PATH + '/l7policies'
L7POLICY_PATH = L7POLICIES_PATH + '/{l7policy_id}'
L7RULES_PATH = L7POLICY_PATH + '/l7rules'
L7RULE_PATH = L7RULES_PATH + '/{l7rule_id}'
def setUp(self):
super(BaseAPITest, self).setUp()
conf = self.useFixture(oslo_fixture.Config(cfg.CONF))
conf.config(group='api_settings', api_handler='simulated_handler')
conf.config(group="controller_worker",
network_driver='network_noop_driver')
conf.config(group='api_settings', auth_strategy=constants.NOAUTH)
self.lb_repo = repositories.LoadBalancerRepository()
self.listener_repo = repositories.ListenerRepository()
self.listener_stats_repo = repositories.ListenerStatisticsRepository()
self.pool_repo = repositories.PoolRepository()
self.member_repo = repositories.MemberRepository()
self.amphora_repo = repositories.AmphoraRepository()
patcher = mock.patch('octavia.api.handlers.controller_simulator.'
'handler.SimulatedControllerHandler')
self.handler_mock = patcher.start()
self.check_quota_met_true_mock = mock.patch(
'octavia.db.repositories.Repositories.check_quota_met',
return_value=True)
self.app = self._make_app()
self.project_id = uuidutils.generate_uuid()
def reset_pecan():
patcher.stop()
pecan.set_config({}, overwrite=True)
self.addCleanup(reset_pecan)
def _make_app(self):
return pecan.testing.load_test_app(
{'app': pconfig.app, 'wsme': pconfig.wsme})
def _get_full_path(self, path):
return ''.join([self.BASE_PATH, path])
def delete(self, path, headers=None, status=202, expect_errors=False):
headers = headers or {}
full_path = self._get_full_path(path)
response = self.app.delete(full_path,
headers=headers,
status=status,
expect_errors=expect_errors)
return response
def post(self, path, body, headers=None, status=202, expect_errors=False):
headers = headers or {}
full_path = self._get_full_path(path)
response = self.app.post_json(full_path,
params=body,
headers=headers,
status=status,
expect_errors=expect_errors)
return response
def put(self, path, body, headers=None, status=202, expect_errors=False):
headers = headers or {}
full_path = self._get_full_path(path)
response = self.app.put_json(full_path,
params=body,
headers=headers,
status=status,
expect_errors=expect_errors)
return response
def get(self, path, params=None, headers=None, status=200,
expect_errors=False):
full_path = self._get_full_path(path)
response = self.app.get(full_path,
params=params,
headers=headers,
status=status,
expect_errors=expect_errors)
return response
def create_load_balancer(self, vip, **optionals):
req_dict = {'vip': vip, 'project_id': self.project_id}
req_dict.update(optionals)
response = self.post(self.LBS_PATH, req_dict)
return response.json
def create_listener(self, lb_id, protocol, protocol_port, **optionals):
req_dict = {'protocol': protocol, 'protocol_port': protocol_port,
'project_id': self.project_id}
req_dict.update(optionals)
path = self.LISTENERS_PATH.format(lb_id=lb_id)
response = self.post(path, req_dict)
return response.json
def create_listener_stats(self, listener_id, amphora_id):
db_ls = self.listener_stats_repo.create(
db_api.get_session(), listener_id=listener_id,
amphora_id=amphora_id, bytes_in=0,
bytes_out=0, active_connections=0, total_connections=0,
request_errors=0)
return db_ls.to_dict()
def create_amphora(self, amphora_id, loadbalancer_id, **optionals):
# We need to default these values in the request.
opts = {'compute_id': uuidutils.generate_uuid(),
'status': constants.ACTIVE}
opts.update(optionals)
amphora = self.amphora_repo.create(
self.session, id=amphora_id,
load_balancer_id=loadbalancer_id,
**opts)
return amphora
def get_listener(self, lb_id, listener_id):
path = self.LISTENER_PATH.format(lb_id=lb_id, listener_id=listener_id)
response = self.get(path)
return response.json
def create_pool_sans_listener(self, lb_id, protocol, lb_algorithm,
**optionals):
req_dict = {'protocol': protocol, 'lb_algorithm': lb_algorithm,
'project_id': self.project_id}
req_dict.update(optionals)
path = self.POOLS_PATH.format(lb_id=lb_id)
response = self.post(path, req_dict)
return response.json
def create_pool(self, lb_id, listener_id, protocol, lb_algorithm,
**optionals):
req_dict = {'protocol': protocol, 'lb_algorithm': lb_algorithm,
'project_id': self.project_id}
req_dict.update(optionals)
path = self.DEPRECATED_POOLS_PATH.format(lb_id=lb_id,
listener_id=listener_id)
response = self.post(path, req_dict)
return response.json
def create_member(self, lb_id, pool_id, ip_address,
protocol_port, expect_error=False, **optionals):
req_dict = {'ip_address': ip_address, 'protocol_port': protocol_port,
'project_id': self.project_id}
req_dict.update(optionals)
path = self.MEMBERS_PATH.format(lb_id=lb_id, pool_id=pool_id)
response = self.post(path, req_dict, expect_errors=expect_error)
return response.json
def create_member_with_listener(self, lb_id, listener_id, pool_id,
ip_address, protocol_port, **optionals):
req_dict = {'ip_address': ip_address, 'protocol_port': protocol_port,
'project_id': self.project_id}
req_dict.update(optionals)
path = self.DEPRECATED_MEMBERS_PATH.format(
lb_id=lb_id, listener_id=listener_id, pool_id=pool_id)
response = self.post(path, req_dict)
return response.json
def create_health_monitor(self, lb_id, pool_id, type,
delay, timeout, fall_threshold, rise_threshold,
**optionals):
req_dict = {'type': type,
'delay': delay,
'timeout': timeout,
'fall_threshold': fall_threshold,
'rise_threshold': rise_threshold,
'project_id': self.project_id}
req_dict.update(optionals)
path = self.HM_PATH.format(lb_id=lb_id,
pool_id=pool_id)
response = self.post(path, req_dict)
return response.json
def create_health_monitor_with_listener(
self, lb_id, listener_id, pool_id, type,
delay, timeout, fall_threshold, rise_threshold, **optionals):
req_dict = {'type': type,
'delay': delay,
'timeout': timeout,
'fall_threshold': fall_threshold,
'rise_threshold': rise_threshold,
'project_id': self.project_id}
req_dict.update(optionals)
path = self.DEPRECATED_HM_PATH.format(
lb_id=lb_id, listener_id=listener_id, pool_id=pool_id)
response = self.post(path, req_dict)
return response.json
def create_l7policy(self, lb_id, listener_id, action, **optionals):
req_dict = {'action': action}
req_dict.update(optionals)
path = self.L7POLICIES_PATH.format(lb_id=lb_id,
listener_id=listener_id)
response = self.post(path, req_dict)
return response.json
def create_l7rule(self, lb_id, listener_id, l7policy_id, type,
compare_type, value, **optionals):
req_dict = {'type': type, 'compare_type': compare_type, 'value': value}
req_dict.update(optionals)
path = self.L7RULES_PATH.format(lb_id=lb_id, listener_id=listener_id,
l7policy_id=l7policy_id)
response = self.post(path, req_dict)
return response.json
def _set_lb_and_children_statuses(self, lb_id, prov_status, op_status):
self.lb_repo.update(db_api.get_session(), lb_id,
provisioning_status=prov_status,
operating_status=op_status)
lb_listeners, _ = self.listener_repo.get_all(
db_api.get_session(), load_balancer_id=lb_id)
for listener in lb_listeners:
for pool in listener.pools:
self.pool_repo.update(db_api.get_session(), pool.id,
operating_status=op_status)
for member in pool.members:
self.member_repo.update(db_api.get_session(), member.id,
operating_status=op_status)
self.listener_repo.update(db_api.get_session(), listener.id,
provisioning_status=prov_status,
operating_status=op_status)
def set_lb_status(self, lb_id, status=constants.ACTIVE):
if status == constants.DELETED:
op_status = constants.OFFLINE
elif status == constants.ACTIVE:
op_status = constants.ONLINE
else:
db_lb = self.lb_repo.get(db_api.get_session(), id=lb_id)
op_status = db_lb.operating_status
self._set_lb_and_children_statuses(lb_id, status, op_status)
return self.get(self.LB_PATH.format(lb_id=lb_id)).json
def assert_final_lb_statuses(self, lb_id, delete=False):
expected_prov_status = constants.ACTIVE
expected_op_status = constants.ONLINE
if delete:
expected_prov_status = constants.DELETED
expected_op_status = constants.OFFLINE
self.set_lb_status(lb_id, status=expected_prov_status)
self.assert_correct_lb_status(lb_id, expected_prov_status,
expected_op_status)
def assert_final_listener_statuses(self, lb_id, listener_id, delete=False):
expected_prov_status = constants.ACTIVE
expected_op_status = constants.ONLINE
if delete:
expected_prov_status = constants.DELETED
expected_op_status = constants.OFFLINE
self.set_lb_status(lb_id, status=expected_prov_status)
self.assert_correct_listener_status(lb_id, listener_id,
expected_prov_status,
expected_op_status)
def assert_correct_lb_status(self, lb_id, provisioning_status,
operating_status):
api_lb = self.get(self.LB_PATH.format(lb_id=lb_id)).json
self.assertEqual(provisioning_status,
api_lb.get('provisioning_status'))
self.assertEqual(operating_status,
api_lb.get('operating_status'))
def assert_correct_listener_status(self, lb_id, listener_id,
provisioning_status, operating_status):
api_listener = self.get(self.LISTENER_PATH.format(
lb_id=lb_id, listener_id=listener_id)).json
self.assertEqual(provisioning_status,
api_listener.get('provisioning_status'))
self.assertEqual(operating_status,
api_listener.get('operating_status'))

View File

@@ -1,225 +0,0 @@
# Copyright 2016 Rackspace
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import operator
from oslo_serialization import jsonutils as json
from oslo_utils import uuidutils
from octavia.common import constants
from octavia.tests.functional.api.v1 import base
class TestApiSort(base.BaseAPITest):
def setUp(self):
super(TestApiSort, self).setUp()
self.random_name_desc = [
('b', 'g'), ('h', 'g'), ('b', 'a'), ('c', 'g'),
('g', 'c'), ('h', 'h'), ('a', 'e'), ('g', 'h'),
('g', 'd'), ('e', 'h'), ('h', 'e'), ('b', 'f'),
('b', 'h'), ('a', 'h'), ('g', 'g'), ('h', 'f'),
('c', 'h'), ('g', 'f'), ('f', 'f'), ('d', 'd'),
('g', 'b'), ('a', 'c'), ('h', 'a'), ('h', 'c'),
('e', 'd'), ('d', 'g'), ('c', 'b'), ('f', 'b'),
('c', 'c'), ('d', 'c'), ('f', 'a'), ('h', 'd'),
('f', 'c'), ('d', 'a'), ('d', 'e'), ('d', 'f'),
('g', 'e'), ('a', 'a'), ('e', 'c'), ('e', 'b'),
('f', 'g'), ('d', 'b'), ('e', 'a'), ('b', 'e'),
('f', 'h'), ('a', 'g'), ('c', 'd'), ('b', 'd'),
('b', 'b'), ('a', 'b'), ('f', 'd'), ('f', 'e'),
('c', 'a'), ('b', 'c'), ('e', 'f'), ('a', 'f'),
('e', 'e'), ('h', 'b'), ('d', 'h'), ('e', 'g'),
('c', 'e'), ('g', 'a'), ('a', 'd'), ('c', 'f')]
self.headers = {'accept': constants.APPLICATION_JSON,
'content-type': constants.APPLICATION_JSON}
self.lbs = []
self.lb_names = ['lb_c', 'lb_a', 'lb_b', 'lb_e', 'lb_d']
def _create_loadbalancers(self):
for name in self.lb_names:
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()}, name=name)
self.lbs.append(lb)
def test_lb_keysort(self):
self._create_loadbalancers()
params = {'sort': 'name:desc',
'project_id': self.project_id}
resp = self.get(self.LBS_PATH, params=params,
headers=self.headers)
lbs = json.loads(resp.body)
act_names = [l['name'] for l in lbs]
ref_names = sorted(self.lb_names[:], reverse=True)
self.assertEqual(ref_names, act_names) # Should be in order
def test_loadbalancer_sorting_and_pagination(self):
# Python's stable sort will allow us to simulate the full sorting
# capabilities of the api during testing.
exp_order = self.random_name_desc[:]
exp_order.sort(key=operator.itemgetter(1), reverse=False)
exp_order.sort(key=operator.itemgetter(0), reverse=True)
for (name, desc) in self.random_name_desc:
self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name=name, description=desc)
params = {'sort': 'name:desc,description:asc',
'project_id': self.project_id}
# Get all lbs
resp = self.get(self.LBS_PATH, headers=self.headers, params=params)
all_lbs = json.loads(resp.body)
# Test the first 8 which is just limit=8
params.update({'limit': '8'})
resp = self.get(self.LBS_PATH, headers=self.headers, params=params)
lbs = json.loads(resp.body)
fnd_name_descs = [(lb['name'], lb['description']) for lb in lbs]
self.assertEqual(exp_order[0:8], fnd_name_descs)
# Test the slice at 8:24 which is marker=7 limit=16
params.update({'marker': all_lbs[7].get('id'), 'limit': '16'})
resp = self.get(self.LBS_PATH, headers=self.headers, params=params)
lbs = json.loads(resp.body)
fnd_name_descs = [(lb['name'], lb['description']) for lb in lbs]
self.assertEqual(exp_order[8:24], fnd_name_descs)
# Test the slice at 32:56 which is marker=31 limit=24
params.update({'marker': all_lbs[31].get('id'), 'limit': '24'})
resp = self.get(self.LBS_PATH, headers=self.headers, params=params)
lbs = json.loads(resp.body)
fnd_name_descs = [(lb['name'], lb['description']) for lb in lbs]
self.assertEqual(exp_order[32:56], fnd_name_descs)
# Test the last 8 entries which is slice 56:64 marker=55 limit=8
params.update({'marker': all_lbs[55].get('id'), 'limit': '8'})
resp = self.get(self.LBS_PATH, headers=self.headers, params=params)
lbs = json.loads(resp.body)
fnd_name_descs = [(lb['name'], lb['description']) for lb in lbs]
self.assertEqual(exp_order[56:64], fnd_name_descs)
# Test that we don't get an overflow or some other error if
# the number of entries is less then the limit.
# This should only return 4 entries
params.update({'marker': all_lbs[59].get('id'), 'limit': '8'})
resp = self.get(self.LBS_PATH, headers=self.headers, params=params)
lbs = json.loads(resp.body)
fnd_name_descs = [(lb['name'], lb['description']) for lb in lbs]
self.assertEqual(exp_order[60:64], fnd_name_descs)
def test_listeners_sorting_and_pagination(self):
# Create a loadbalancer and create 2 listeners on it
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()}, name="single_lb")
lb_id = lb['id']
self.set_lb_status(lb_id)
exp_desc_names = self.random_name_desc[30:40]
exp_desc_names.sort(key=operator.itemgetter(0), reverse=True)
exp_desc_names.sort(key=operator.itemgetter(1), reverse=True)
port = 0
# We did some heavy testing already and the set_lb_status function
# is recursive and leads to n*(n-1) iterations during this test so
# we only test 10 entries
for (name, description) in self.random_name_desc[30:40]:
port += 1
opts = {"name": name, "description": description}
self.create_listener(lb_id, constants.PROTOCOL_HTTP, port, **opts)
# Set the lb to active but don't recurse the child objects as
# that will create a n*(n-1) operation in this loop
self.set_lb_status(lb_id)
url = self.LISTENERS_PATH.format(lb_id=lb_id)
params = {'sort': 'description:desc,name:desc',
'project_id': self.project_id}
# Get all listeners
resp = self.get(url, headers=self.headers, params=params)
all_listeners = json.loads(resp.body)
# Test the slice at 3:6
params.update({'marker': all_listeners[2].get('id'), 'limit': '3'})
resp = self.get(url, headers=self.headers, params=params)
listeners = json.loads(resp.body)
fnd_name_desc = [(l['name'], l['description']) for l in listeners]
self.assertEqual(exp_desc_names[3:6], fnd_name_desc)
# Test the slice at 1:8
params.update({'marker': all_listeners[0].get('id'), 'limit': '7'})
resp = self.get(url, headers=self.headers, params=params)
listeners = json.loads(resp.body)
fnd_name_desc = [(l['name'], l['description']) for l in listeners]
self.assertEqual(exp_desc_names[1:8], fnd_name_desc)
def test_members_sorting_and_pagination(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()}, name="single_lb")
lb_id = lb['id']
self.set_lb_status(lb_id)
li = self.create_listener(lb_id, constants.PROTOCOL_HTTP, 80)
li_id = li['id']
self.set_lb_status(lb_id)
p = self.create_pool(lb_id, li_id, constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(lb_id)
pool_id = p['id']
exp_ip_weights = [('127.0.0.4', 3), ('127.0.0.5', 1), ('127.0.0.2', 5),
('127.0.0.1', 4), ('127.0.0.3', 2)]
for(ip, weight) in exp_ip_weights:
self.create_member(lb_id, pool_id, ip, 80, weight=weight)
self.set_lb_status(lb_id)
exp_ip_weights.sort(key=operator.itemgetter(1))
exp_ip_weights.sort(key=operator.itemgetter(0))
url = self.MEMBERS_PATH.format(lb_id=lb_id, pool_id=pool_id)
params = {'sort': 'ip_address,weight:asc',
'project_id': self.project_id}
# Get all members
resp = self.get(url, headers=self.headers, params=params)
all_members = json.loads(resp.body)
# These tests are getting exhaustive -- just test marker=0 limit=2
params.update({'marker': all_members[0].get('id'), 'limit': '2'})
resp = self.get(url, headers=self.headers, params=params)
members = json.loads(resp.body)
fnd_ip_subs = [(m['ip_address'], m['weight']) for m in members]
self.assertEqual(exp_ip_weights[1:3], fnd_ip_subs)
def test_invalid_limit(self):
params = {'project_id': self.project_id,
'limit': 'a'}
self.get(self.LBS_PATH, headers=self.headers, params=params,
status=400)
def test_invalid_marker(self):
params = {'project_id': self.project_id,
'marker': 'not_a_valid_uuid'}
self.get(self.LBS_PATH, headers=self.headers, params=params,
status=400)
def test_invalid_sort_key(self):
params = {'sort': 'name:desc:asc',
'project_id': self.project_id}
self.get(self.LBS_PATH, headers=self.headers, params=params,
status=400)

View File

@@ -1,314 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import uuidutils
from octavia.common import constants
from octavia.tests.functional.api.v1 import base
class TestHealthMonitor(base.BaseAPITest):
def setUp(self):
super(TestHealthMonitor, self).setUp()
self.lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()})
self.set_lb_status(self.lb.get('id'))
self.listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 80)
self.set_lb_status(self.lb.get('id'))
self.pool = self.create_pool_sans_listener(
self.lb.get('id'), constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(self.lb.get('id'))
self.pool_with_listener = self.create_pool(
self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(self.lb.get('id'))
self.hm_path = self.HM_PATH.format(lb_id=self.lb.get('id'),
pool_id=self.pool.get('id'))
self.deprecated_hm_path = self.DEPRECATED_HM_PATH.format(
lb_id=self.lb.get('id'), listener_id=self.listener.get('id'),
pool_id=self.pool_with_listener.get('id'))
def test_get(self):
api_hm = self.create_health_monitor(self.lb.get('id'),
self.pool.get('id'),
constants.HEALTH_MONITOR_HTTP,
1, 1, 1, 1)
self.set_lb_status(lb_id=self.lb.get('id'))
response = self.get(self.hm_path)
response_body = response.json
self.assertEqual(api_hm, response_body)
def test_bad_get(self):
self.get(self.hm_path, status=404)
def test_create_sans_listener(self):
api_hm = self.create_health_monitor(self.lb.get('id'),
self.pool.get('id'),
constants.HEALTH_MONITOR_HTTP,
1, 1, 1, 1)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assertEqual(constants.HEALTH_MONITOR_HTTP, api_hm.get('type'))
self.assertEqual(1, api_hm.get('delay'))
self.assertEqual(1, api_hm.get('timeout'))
self.assertEqual(1, api_hm.get('fall_threshold'))
self.assertEqual(1, api_hm.get('rise_threshold'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_with_listener(self):
api_hm = self.create_health_monitor_with_listener(
self.lb.get('id'), self.listener.get('id'),
self.pool_with_listener.get('id'),
constants.HEALTH_MONITOR_HTTP, 1, 1, 1, 1)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assertEqual(constants.HEALTH_MONITOR_HTTP, api_hm.get('type'))
self.assertEqual(1, api_hm.get('delay'))
self.assertEqual(1, api_hm.get('timeout'))
self.assertEqual(1, api_hm.get('fall_threshold'))
self.assertEqual(1, api_hm.get('rise_threshold'))
# Verify optional field defaults
self.assertEqual('GET', api_hm.get('http_method'))
self.assertEqual('/', api_hm.get('url_path'))
self.assertEqual('200', api_hm.get('expected_codes'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_with_project_id(self):
pid = uuidutils.generate_uuid()
api_hm = self.create_health_monitor(self.lb.get('id'),
self.pool.get('id'),
constants.HEALTH_MONITOR_HTTP,
1, 1, 1, 1, project_id=pid)
# The project_id passed in the body is ignored; the request
# context's project_id is what gets stored.
self.assertEqual(self.project_id, api_hm.get('project_id'))
def test_create_over_quota(self):
self.check_quota_met_true_mock.start()
self.addCleanup(self.check_quota_met_true_mock.stop)
self.post(self.hm_path,
body={'type': constants.HEALTH_MONITOR_HTTP,
'delay': 1, 'timeout': 1, 'fall_threshold': 1,
'rise_threshold': 1, 'project_id': self.project_id},
status=403)
def test_bad_create(self):
hm_json = {'name': 'test1'}
self.post(self.deprecated_hm_path, hm_json, status=400)
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_with_bad_handler(self):
self.handler_mock().health_monitor.create.side_effect = Exception()
self.create_health_monitor_with_listener(
self.lb.get('id'), self.listener.get('id'),
self.pool_with_listener.get('id'),
constants.HEALTH_MONITOR_HTTP, 1, 1, 1, 1)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_duplicate_create(self):
api_hm = self.create_health_monitor(self.lb.get('id'),
self.pool.get('id'),
constants.HEALTH_MONITOR_HTTP,
1, 1, 1, 1)
self.set_lb_status(lb_id=self.lb.get('id'))
self.post(self.hm_path, api_hm, status=409)
def test_update(self):
self.create_health_monitor_with_listener(
self.lb.get('id'), self.listener.get('id'),
self.pool_with_listener.get('id'),
constants.HEALTH_MONITOR_HTTP, 1, 1, 1, 1)
self.set_lb_status(lb_id=self.lb.get('id'))
new_hm = {'type': constants.HEALTH_MONITOR_HTTPS}
self.put(self.deprecated_hm_path, new_hm)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_bad_update(self):
self.skipTest("This test will need to be reviewed after a "
"validation layer is built")
self.create_health_monitor(self.lb.get('id'),
self.pool.get('id'),
constants.HEALTH_MONITOR_HTTP,
1, 1, 1, 1)
new_hm = {'type': 'bad_type', 'delay': 2}
self.set_lb_status(self.lb.get('id'))
self.put(self.hm_path, new_hm, status=400)
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_update_with_bad_handler(self):
self.create_health_monitor_with_listener(
self.lb.get('id'), self.listener.get('id'),
self.pool_with_listener.get('id'),
constants.HEALTH_MONITOR_HTTP, 1, 1, 1, 1)
self.set_lb_status(lb_id=self.lb.get('id'))
new_hm = {'type': constants.HEALTH_MONITOR_HTTPS}
self.handler_mock().health_monitor.update.side_effect = Exception()
self.put(self.deprecated_hm_path, new_hm)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_delete(self):
api_hm = self.create_health_monitor_with_listener(
self.lb.get('id'), self.listener.get('id'),
self.pool_with_listener.get('id'),
constants.HEALTH_MONITOR_HTTP, 1, 1, 1, 1)
self.set_lb_status(lb_id=self.lb.get('id'))
response = self.get(self.deprecated_hm_path)
self.assertEqual(api_hm, response.json)
self.delete(self.deprecated_hm_path)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_bad_delete(self):
self.delete(self.hm_path, status=404)
def test_delete_with_bad_handler(self):
api_hm = self.create_health_monitor_with_listener(
self.lb.get('id'), self.listener.get('id'),
self.pool_with_listener.get('id'),
constants.HEALTH_MONITOR_HTTP, 1, 1, 1, 1)
self.set_lb_status(lb_id=self.lb.get('id'))
response = self.get(self.deprecated_hm_path)
self.assertEqual(api_hm, response.json)
self.handler_mock().health_monitor.delete.side_effect = Exception()
self.delete(self.deprecated_hm_path)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_create_when_lb_pending_update(self):
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
self.post(self.hm_path,
body={'type': constants.HEALTH_MONITOR_HTTP,
'delay': 1, 'timeout': 1, 'fall_threshold': 1,
'rise_threshold': 1, 'project_id': self.project_id},
status=409)
def test_update_when_lb_pending_update(self):
self.create_health_monitor(self.lb.get('id'), self.pool.get('id'),
constants.HEALTH_MONITOR_HTTP, 1, 1, 1, 1)
self.set_lb_status(self.lb.get('id'))
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
self.put(self.hm_path, body={'rise_threshold': 2}, status=409)
def test_delete_when_lb_pending_update(self):
self.create_health_monitor(self.lb.get('id'), self.pool.get('id'),
constants.HEALTH_MONITOR_HTTP, 1, 1, 1, 1)
self.set_lb_status(self.lb.get('id'))
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
self.delete(self.hm_path, status=409)
def test_create_when_lb_pending_delete(self):
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
self.post(self.hm_path,
body={'type': constants.HEALTH_MONITOR_HTTP,
'delay': 1, 'timeout': 1, 'fall_threshold': 1,
'rise_threshold': 1, 'project_id': self.project_id},
status=409)
def test_update_when_lb_pending_delete(self):
self.create_health_monitor(self.lb.get('id'), self.pool.get('id'),
constants.HEALTH_MONITOR_HTTP, 1, 1, 1, 1)
self.set_lb_status(self.lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
self.put(self.hm_path, body={'rise_threshold': 2}, status=409)
def test_delete_when_lb_pending_delete(self):
self.create_health_monitor(self.lb.get('id'), self.pool.get('id'),
constants.HEALTH_MONITOR_HTTP, 1, 1, 1, 1)
self.set_lb_status(self.lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
self.delete(self.hm_path, status=409)


@@ -1,425 +0,0 @@
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import uuidutils
from octavia.common import constants
from octavia.tests.functional.api.v1 import base
class TestL7Policy(base.BaseAPITest):
def setUp(self):
super(TestL7Policy, self).setUp()
self.lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()})
self.set_lb_status(self.lb.get('id'))
self.listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 80)
self.set_lb_status(self.lb.get('id'))
self.pool = self.create_pool_sans_listener(
self.lb.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(self.lb.get('id'))
self.l7policies_path = self.L7POLICIES_PATH.format(
lb_id=self.lb.get('id'), listener_id=self.listener.get('id'))
self.l7policy_path = self.l7policies_path + '/{l7policy_id}'
def test_get(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
response = self.get(self.l7policy_path.format(
l7policy_id=api_l7policy.get('id')))
response_body = response.json
self.assertEqual(api_l7policy, response_body)
def test_bad_get(self):
self.get(self.l7policy_path.format(
l7policy_id=uuidutils.generate_uuid()), status=404)
def test_get_all(self):
api_l7p_a = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
api_l7p_c = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
api_l7p_b = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT, position=2)
self.set_lb_status(self.lb.get('id'))
# api_l7p_b was inserted before api_l7p_c
api_l7p_c['position'] = 3
response = self.get(self.l7policies_path)
response_body = response.json
self.assertIsInstance(response_body, list)
self.assertEqual(3, len(response_body))
self.assertEqual(api_l7p_a, response_body[0])
self.assertEqual(api_l7p_b, response_body[1])
self.assertEqual(api_l7p_c, response_body[2])
def test_empty_get_all(self):
response = self.get(self.l7policies_path)
response_body = response.json
self.assertIsInstance(response_body, list)
self.assertEqual(0, len(response_body))
def test_create_reject_policy(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.assertEqual(constants.L7POLICY_ACTION_REJECT,
api_l7policy.get('action'))
self.assertEqual(1, api_l7policy.get('position'))
self.assertIsNone(api_l7policy.get('redirect_pool_id'))
self.assertIsNone(api_l7policy.get('redirect_url'))
self.assertTrue(api_l7policy.get('enabled'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_redirect_to_pool(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REDIRECT_TO_POOL,
redirect_pool_id=self.pool.get('id'))
self.assertEqual(constants.L7POLICY_ACTION_REDIRECT_TO_POOL,
api_l7policy.get('action'))
self.assertEqual(1, api_l7policy.get('position'))
self.assertEqual(self.pool.get('id'),
api_l7policy.get('redirect_pool_id'))
self.assertIsNone(api_l7policy.get('redirect_url'))
self.assertTrue(api_l7policy.get('enabled'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_redirect_to_url(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REDIRECT_TO_URL,
redirect_url='http://www.example.com')
self.assertEqual(constants.L7POLICY_ACTION_REDIRECT_TO_URL,
api_l7policy.get('action'))
self.assertEqual(1, api_l7policy.get('position'))
self.assertIsNone(api_l7policy.get('redirect_pool_id'))
self.assertEqual('http://www.example.com',
api_l7policy.get('redirect_url'))
self.assertTrue(api_l7policy.get('enabled'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_with_id(self):
l7p_id = uuidutils.generate_uuid()
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT, id=l7p_id)
self.assertEqual(l7p_id, api_l7policy.get('id'))
def test_create_with_duplicate_id(self):
l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'), constants.ACTIVE)
path = self.L7POLICIES_PATH.format(lb_id=self.lb.get('id'),
listener_id=self.listener.get('id'))
body = {'id': l7policy.get('id'),
'action': constants.L7POLICY_ACTION_REJECT}
self.post(path, body, status=409)
def test_bad_create(self):
l7policy = {'name': 'test1'}
self.post(self.l7policies_path, l7policy, status=400)
def test_bad_create_redirect_to_pool(self):
l7policy = {'action': constants.L7POLICY_ACTION_REDIRECT_TO_POOL,
'redirect_pool_id': uuidutils.generate_uuid()}
self.post(self.l7policies_path, l7policy, status=404)
def test_bad_create_redirect_to_url(self):
l7policy = {'action': constants.L7POLICY_ACTION_REDIRECT_TO_URL,
'redirect_url': 'bad url'}
self.post(self.l7policies_path, l7policy, status=400)
def test_create_with_bad_handler(self):
self.handler_mock().l7policy.create.side_effect = Exception()
self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_update(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
new_l7policy = {'action': constants.L7POLICY_ACTION_REDIRECT_TO_URL,
'redirect_url': 'http://www.example.com'}
response = self.put(self.l7policy_path.format(
l7policy_id=api_l7policy.get('id')), new_l7policy, status=202)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
response_body = response.json
self.assertEqual(constants.L7POLICY_ACTION_REJECT,
response_body.get('action'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_bad_update(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
new_l7policy = {'action': 'bad action'}
self.put(self.l7policy_path.format(l7policy_id=api_l7policy.get('id')),
new_l7policy, status=400)
def test_bad_update_redirect_to_pool(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
new_l7policy = {'action': constants.L7POLICY_ACTION_REDIRECT_TO_POOL,
'redirect_pool_id': uuidutils.generate_uuid()}
self.put(self.l7policy_path.format(l7policy_id=api_l7policy.get('id')),
new_l7policy, status=404)
def test_bad_update_redirect_to_url(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
new_l7policy = {'action': constants.L7POLICY_ACTION_REDIRECT_TO_URL,
'redirect_url': 'bad url'}
self.put(self.l7policy_path.format(l7policy_id=api_l7policy.get('id')),
new_l7policy, status=400)
def test_update_with_bad_handler(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
new_l7policy = {'action': constants.L7POLICY_ACTION_REDIRECT_TO_URL,
'redirect_url': 'http://www.example.com'}
self.handler_mock().l7policy.update.side_effect = Exception()
self.put(self.l7policy_path.format(
l7policy_id=api_l7policy.get('id')), new_l7policy, status=202)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_update_redirect_to_pool_bad_pool_id(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
path = self.l7policy_path.format(l7policy_id=api_l7policy.get('id'))
new_l7policy = {'redirect_pool_id': uuidutils.generate_uuid()}
self.put(path, new_l7policy, status=404)
def test_update_redirect_to_pool_minimal(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
path = self.l7policy_path.format(l7policy_id=api_l7policy.get('id'))
new_l7policy = {'redirect_pool_id': self.pool.get('id')}
self.put(path, new_l7policy, status=202)
def test_update_redirect_to_url_bad_url(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
path = self.l7policy_path.format(l7policy_id=api_l7policy.get('id'))
new_l7policy = {'redirect_url': 'bad-url'}
self.put(path, new_l7policy, status=400)
def test_update_redirect_to_url_minimal(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
path = self.l7policy_path.format(l7policy_id=api_l7policy.get('id'))
new_l7policy = {'redirect_url': 'http://www.example.com/'}
self.put(path, new_l7policy, status=202)
def test_delete(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
response = self.get(self.l7policy_path.format(
l7policy_id=api_l7policy.get('id')))
self.assertEqual(api_l7policy, response.json)
self.delete(self.l7policy_path.format(
l7policy_id=api_l7policy.get('id')))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_bad_delete(self):
self.delete(self.l7policy_path.format(
l7policy_id=uuidutils.generate_uuid()), status=404)
def test_delete_with_bad_handler(self):
api_l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
response = self.get(self.l7policy_path.format(
l7policy_id=api_l7policy.get('id')))
self.assertEqual(api_l7policy, response.json)
self.handler_mock().l7policy.delete.side_effect = Exception()
self.delete(self.l7policy_path.format(
l7policy_id=api_l7policy.get('id')))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_create_when_lb_pending_update(self):
self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
new_l7policy = {'action': constants.L7POLICY_ACTION_REDIRECT_TO_URL,
'redirect_url': 'http://www.example.com'}
self.post(self.l7policies_path, body=new_l7policy, status=409)
def test_update_when_lb_pending_update(self):
l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
new_l7policy = {'action': constants.L7POLICY_ACTION_REDIRECT_TO_URL,
'redirect_url': 'http://www.example.com'}
self.put(self.l7policy_path.format(l7policy_id=l7policy.get('id')),
body=new_l7policy, status=409)
def test_delete_when_lb_pending_update(self):
l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
self.delete(self.l7policy_path.format(l7policy_id=l7policy.get('id')),
status=409)
def test_create_when_lb_pending_delete(self):
self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
new_l7policy = {'action': constants.L7POLICY_ACTION_REDIRECT_TO_URL,
'redirect_url': 'http://www.example.com'}
self.post(self.l7policies_path, body=new_l7policy, status=409)
def test_update_when_lb_pending_delete(self):
l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
new_l7policy = {'action': constants.L7POLICY_ACTION_REDIRECT_TO_URL,
'redirect_url': 'http://www.example.com'}
self.put(self.l7policy_path.format(l7policy_id=l7policy.get('id')),
body=new_l7policy, status=409)
def test_delete_when_lb_pending_delete(self):
l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
self.delete(self.l7policy_path.format(l7policy_id=l7policy.get('id')),
status=409)


@@ -1,499 +0,0 @@
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import uuidutils
from octavia.common import constants
from octavia.tests.functional.api.v1 import base
class TestL7Rule(base.BaseAPITest):
def setUp(self):
super(TestL7Rule, self).setUp()
self.lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()})
self.set_lb_status(self.lb.get('id'))
self.listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 80)
self.set_lb_status(self.lb.get('id'))
self.l7policy = self.create_l7policy(
self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REJECT)
self.set_lb_status(self.lb.get('id'))
self.l7rules_path = self.L7RULES_PATH.format(
lb_id=self.lb.get('id'), listener_id=self.listener.get('id'),
l7policy_id=self.l7policy.get('id'))
self.l7rule_path = self.l7rules_path + '/{l7rule_id}'
def test_get(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
response = self.get(self.l7rule_path.format(
l7rule_id=l7rule.get('id')))
response_body = response.json
self.assertEqual(l7rule, response_body)
def test_get_bad_parent_policy(self):
bad_path = (self.L7RULES_PATH.format(
lb_id=self.lb.get('id'), listener_id=self.listener.get('id'),
l7policy_id=uuidutils.generate_uuid()) + '/' +
uuidutils.generate_uuid())
self.get(bad_path, status=404)
def test_bad_get(self):
self.get(self.l7rule_path.format(
l7rule_id=uuidutils.generate_uuid()), status=404)
def test_get_all(self):
api_l7r_a = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'))
api_l7r_b = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/images')
self.set_lb_status(self.lb.get('id'))
response = self.get(self.l7rules_path)
response_body = response.json
self.assertIsInstance(response_body, list)
self.assertEqual(2, len(response_body))
self.assertIn(api_l7r_a, response_body)
self.assertIn(api_l7r_b, response_body)
def test_empty_get_all(self):
response = self.get(self.l7rules_path)
response_body = response.json
self.assertIsInstance(response_body, list)
self.assertEqual(0, len(response_body))
def test_create_host_name_rule(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_HOST_NAME,
constants.L7RULE_COMPARE_TYPE_EQUAL_TO, 'www.example.com')
self.assertEqual(constants.L7RULE_TYPE_HOST_NAME, l7rule.get('type'))
self.assertEqual(constants.L7RULE_COMPARE_TYPE_EQUAL_TO,
l7rule.get('compare_type'))
self.assertEqual('www.example.com', l7rule.get('value'))
self.assertIsNone(l7rule.get('key'))
self.assertFalse(l7rule.get('invert'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_path_rule(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api',
invert=True)
self.assertEqual(constants.L7RULE_TYPE_PATH, l7rule.get('type'))
self.assertEqual(constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
l7rule.get('compare_type'))
self.assertEqual('/api', l7rule.get('value'))
self.assertIsNone(l7rule.get('key'))
self.assertTrue(l7rule.get('invert'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_file_type_rule(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_FILE_TYPE,
constants.L7RULE_COMPARE_TYPE_REGEX, 'jpg|png')
self.assertEqual(constants.L7RULE_TYPE_FILE_TYPE, l7rule.get('type'))
self.assertEqual(constants.L7RULE_COMPARE_TYPE_REGEX,
l7rule.get('compare_type'))
self.assertEqual('jpg|png', l7rule.get('value'))
self.assertIsNone(l7rule.get('key'))
self.assertFalse(l7rule.get('invert'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_header_rule(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_HEADER,
constants.L7RULE_COMPARE_TYPE_ENDS_WITH, '"some string"',
key='Some-header')
self.assertEqual(constants.L7RULE_TYPE_HEADER, l7rule.get('type'))
self.assertEqual(constants.L7RULE_COMPARE_TYPE_ENDS_WITH,
l7rule.get('compare_type'))
self.assertEqual('"some string"', l7rule.get('value'))
self.assertEqual('Some-header', l7rule.get('key'))
self.assertFalse(l7rule.get('invert'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_cookie_rule(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_COOKIE,
constants.L7RULE_COMPARE_TYPE_CONTAINS, 'some-value',
key='some-cookie')
self.assertEqual(constants.L7RULE_TYPE_COOKIE, l7rule.get('type'))
self.assertEqual(constants.L7RULE_COMPARE_TYPE_CONTAINS,
l7rule.get('compare_type'))
self.assertEqual('some-value', l7rule.get('value'))
self.assertEqual('some-cookie', l7rule.get('key'))
self.assertFalse(l7rule.get('invert'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_with_id(self):
l7r_id = uuidutils.generate_uuid()
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api', id=l7r_id)
self.assertEqual(l7r_id, l7rule.get('id'))
def test_create_with_duplicate_id(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'), constants.ACTIVE)
path = self.L7RULES_PATH.format(lb_id=self.lb.get('id'),
listener_id=self.listener.get('id'),
l7policy_id=self.l7policy.get('id'))
body = {'id': l7rule.get('id'),
'type': constants.L7RULE_TYPE_PATH,
'compare_type': constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
'value': '/api'}
self.post(path, body, status=409)
def test_create_too_many_rules(self):
for i in range(0, constants.MAX_L7RULES_PER_L7POLICY):
self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'), constants.ACTIVE)
body = {'type': constants.L7RULE_TYPE_PATH,
'compare_type': constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
'value': '/api'}
self.post(self.l7rules_path, body, status=409)
def test_bad_create(self):
l7rule = {'name': 'test1'}
self.post(self.l7rules_path, l7rule, status=400)
def test_bad_create_host_name_rule(self):
l7rule = {'type': constants.L7RULE_TYPE_HOST_NAME,
'compare_type': constants.L7RULE_COMPARE_TYPE_STARTS_WITH}
self.post(self.l7rules_path, l7rule, status=400)
def test_bad_create_path_rule(self):
l7rule = {'type': constants.L7RULE_TYPE_PATH,
'compare_type': constants.L7RULE_COMPARE_TYPE_REGEX,
'value': 'bad string\\'}
self.post(self.l7rules_path, l7rule, status=400)
def test_bad_create_file_type_rule(self):
l7rule = {'type': constants.L7RULE_TYPE_FILE_TYPE,
'compare_type': constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
'value': 'png'}
self.post(self.l7rules_path, l7rule, status=400)
def test_bad_create_header_rule(self):
l7rule = {'type': constants.L7RULE_TYPE_HEADER,
'compare_type': constants.L7RULE_COMPARE_TYPE_CONTAINS,
'value': 'some-string'}
self.post(self.l7rules_path, l7rule, status=400)
def test_bad_create_cookie_rule(self):
l7rule = {'type': constants.L7RULE_TYPE_COOKIE,
'compare_type': constants.L7RULE_COMPARE_TYPE_EQUAL_TO,
'key': 'bad cookie name',
'value': 'some-string'}
self.post(self.l7rules_path, l7rule, status=400)
def test_create_with_bad_handler(self):
self.handler_mock().l7rule.create.side_effect = Exception()
self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_update(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'))
new_l7rule = {'value': '/images'}
response = self.put(self.l7rule_path.format(
l7rule_id=l7rule.get('id')), new_l7rule, status=202)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
response_body = response.json
self.assertEqual('/api', response_body.get('value'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_bad_update(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
new_l7rule = {'type': 'bad type'}
self.put(self.l7rule_path.format(l7rule_id=l7rule.get('id')),
new_l7rule, expect_errors=True)
def test_update_with_bad_handler(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'))
new_l7rule = {'value': '/images'}
self.handler_mock().l7rule.update.side_effect = Exception()
self.put(self.l7rule_path.format(
l7rule_id=l7rule.get('id')), new_l7rule, status=202)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_update_with_invalid_rule(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'))
new_l7rule = {'compare_type': constants.L7RULE_COMPARE_TYPE_REGEX,
'value': 'bad string\\'}
self.put(self.l7rule_path.format(
l7rule_id=l7rule.get('id')), new_l7rule, status=400)
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE,
constants.ONLINE)
def test_delete(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'))
response = self.get(self.l7rule_path.format(
l7rule_id=l7rule.get('id')))
self.assertEqual(l7rule, response.json)
self.delete(self.l7rule_path.format(l7rule_id=l7rule.get('id')))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_bad_delete(self):
self.delete(self.l7rule_path.format(
l7rule_id=uuidutils.generate_uuid()), status=404)
def test_delete_with_bad_handler(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'))
response = self.get(self.l7rule_path.format(
l7rule_id=l7rule.get('id')))
self.assertEqual(l7rule, response.json)
self.handler_mock().l7rule.delete.side_effect = Exception()
self.delete(self.l7rule_path.format(
l7rule_id=l7rule.get('id')))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_create_when_lb_pending_update(self):
self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'))
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
new_l7rule = {'type': constants.L7RULE_TYPE_PATH,
'compare_type': constants.L7RULE_COMPARE_TYPE_EQUAL_TO,
'value': '/api'}
self.post(self.l7rules_path, body=new_l7rule, status=409)
def test_update_when_lb_pending_update(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'))
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
new_l7rule = {'type': constants.L7RULE_TYPE_HOST_NAME,
'compare_type': constants.L7RULE_COMPARE_TYPE_REGEX,
'value': '.*.example.com'}
self.put(self.l7rule_path.format(l7rule_id=l7rule.get('id')),
body=new_l7rule, status=409)
def test_delete_when_lb_pending_update(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'))
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
self.delete(self.l7rule_path.format(l7rule_id=l7rule.get('id')),
status=409)
def test_create_when_lb_pending_delete(self):
self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
new_l7rule = {'type': constants.L7RULE_TYPE_HEADER,
'compare_type':
constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
'value': 'some-string',
'key': 'Some-header'}
self.post(self.l7rules_path, body=new_l7rule, status=409)
def test_update_when_lb_pending_delete(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
new_l7rule = {'type': constants.L7RULE_TYPE_COOKIE,
'compare_type':
constants.L7RULE_COMPARE_TYPE_ENDS_WITH,
'value': 'some-string',
'key': 'some-cookie'}
self.put(self.l7rule_path.format(l7rule_id=l7rule.get('id')),
body=new_l7rule, status=409)
def test_delete_when_lb_pending_delete(self):
l7rule = self.create_l7rule(
self.lb.get('id'), self.listener.get('id'),
self.l7policy.get('id'), constants.L7RULE_TYPE_PATH,
constants.L7RULE_COMPARE_TYPE_STARTS_WITH, '/api')
self.set_lb_status(self.lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
self.delete(self.l7rule_path.format(l7rule_id=l7rule.get('id')),
status=409)


@@ -1,460 +0,0 @@
# Copyright 2014 Rackspace
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import uuidutils
from octavia.common import constants
from octavia.tests.functional.api.v1 import base
class TestListener(base.BaseAPITest):
def setUp(self):
super(TestListener, self).setUp()
self.lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()})
self.set_lb_status(self.lb.get('id'))
self.listeners_path = self.LISTENERS_PATH.format(
lb_id=self.lb.get('id'))
self.pool = self.create_pool_sans_listener(
self.lb.get('id'), constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(self.lb.get('id'))
def test_get_all(self):
listener1 = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 80)
self.set_lb_status(self.lb.get('id'))
listener2 = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 81)
self.set_lb_status(self.lb.get('id'))
listener3 = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 82)
self.set_lb_status(self.lb.get('id'))
response = self.get(self.listeners_path)
api_listeners = response.json
self.assertEqual(3, len(api_listeners))
listener1['provisioning_status'] = constants.ACTIVE
listener1['operating_status'] = constants.ONLINE
listener2['provisioning_status'] = constants.ACTIVE
listener2['operating_status'] = constants.ONLINE
listener3['provisioning_status'] = constants.ACTIVE
listener3['operating_status'] = constants.ONLINE
for listener in api_listeners:
del listener['updated_at']
self.assertIsNone(listener1.pop('updated_at'))
self.assertIsNone(listener2.pop('updated_at'))
self.assertIsNone(listener3.pop('updated_at'))
self.assertIn(listener1, api_listeners)
self.assertIn(listener2, api_listeners)
self.assertIn(listener3, api_listeners)
def test_get_all_bad_lb_id(self):
path = self.LISTENERS_PATH.format(lb_id='SEAN-CONNERY')
self.get(path, status=404)
def test_get(self):
listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 80)
listener_path = self.LISTENER_PATH.format(
lb_id=self.lb.get('id'), listener_id=listener.get('id'))
response = self.get(listener_path)
api_lb = response.json
expected = {'name': None, 'description': None, 'enabled': True,
'operating_status': constants.OFFLINE,
'provisioning_status': constants.PENDING_CREATE,
'connection_limit': None}
listener.update(expected)
self.assertEqual(listener, api_lb)
def test_get_bad_listener_id(self):
listener_path = self.LISTENER_PATH.format(lb_id=self.lb.get('id'),
listener_id='SEAN-CONNERY')
self.get(listener_path, status=404)
def test_create(self, **optionals):
sni1 = uuidutils.generate_uuid()
sni2 = uuidutils.generate_uuid()
lb_listener = {'name': 'listener1', 'default_pool_id': None,
'description': 'desc1',
'enabled': False, 'protocol': constants.PROTOCOL_HTTP,
'protocol_port': 80, 'connection_limit': 10,
'tls_certificate_id': uuidutils.generate_uuid(),
'sni_containers': [sni1, sni2],
'insert_headers': {},
'project_id': uuidutils.generate_uuid()}
lb_listener.update(optionals)
response = self.post(self.listeners_path, lb_listener)
listener_api = response.json
extra_expects = {'provisioning_status': constants.PENDING_CREATE,
'operating_status': constants.OFFLINE}
lb_listener.update(extra_expects)
self.assertTrue(uuidutils.is_uuid_like(listener_api.get('id')))
for key, value in optionals.items():
self.assertEqual(value, lb_listener.get(key))
lb_listener['id'] = listener_api.get('id')
lb_listener.pop('sni_containers')
sni_ex = [sni1, sni2]
sni_resp = listener_api.pop('sni_containers')
self.assertEqual(2, len(sni_resp))
for sni in sni_resp:
self.assertIn(sni, sni_ex)
self.assertIsNotNone(listener_api.pop('created_at'))
self.assertIsNone(listener_api.pop('updated_at'))
self.assertEqual(listener_api['project_id'],
listener_api.pop('tenant_id'))
lb_listener['project_id'] = self.project_id
self.assertEqual(lb_listener, listener_api)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_final_lb_statuses(self.lb.get('id'))
self.assert_final_listener_statuses(self.lb.get('id'),
listener_api.get('id'))
def test_create_with_default_pool_id(self):
lb_listener = {'name': 'listener1',
'default_pool_id': self.pool.get('id'),
'description': 'desc1',
'enabled': False, 'protocol': constants.PROTOCOL_HTTP,
'protocol_port': 80,
'project_id': self.project_id}
response = self.post(self.listeners_path, lb_listener)
api_listener = response.json
self.assertEqual(api_listener.get('default_pool_id'),
self.pool.get('id'))
def test_create_with_bad_default_pool_id(self):
lb_listener = {'name': 'listener1',
'default_pool_id': uuidutils.generate_uuid(),
'description': 'desc1',
'enabled': False, 'protocol': constants.PROTOCOL_HTTP,
'protocol_port': 80,
'project_id': self.project_id}
self.post(self.listeners_path, lb_listener, status=404)
def test_create_with_id(self):
self.test_create(id=uuidutils.generate_uuid())
def test_create_with_shared_default_pool_id(self):
lb_listener1 = {'name': 'listener1',
'default_pool_id': self.pool.get('id'),
'description': 'desc1',
'enabled': False, 'protocol': constants.PROTOCOL_HTTP,
'protocol_port': 80,
'project_id': self.project_id}
lb_listener2 = {'name': 'listener2',
'default_pool_id': self.pool.get('id'),
'description': 'desc2',
'enabled': False, 'protocol': constants.PROTOCOL_HTTP,
'protocol_port': 81,
'project_id': self.project_id}
listener1 = self.post(self.listeners_path, lb_listener1).json
self.set_lb_status(self.lb.get('id'), constants.ACTIVE)
listener2 = self.post(self.listeners_path, lb_listener2).json
self.assertEqual(listener1['default_pool_id'], self.pool.get('id'))
self.assertEqual(listener1['default_pool_id'],
listener2['default_pool_id'])
def test_create_with_project_id(self):
self.test_create(project_id=uuidutils.generate_uuid())
def test_create_with_duplicate_id(self):
listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP,
protocol_port=80)
self.set_lb_status(self.lb.get('id'), constants.ACTIVE)
path = self.LISTENERS_PATH.format(lb_id=self.lb.get('id'))
body = {'id': listener.get('id'), 'protocol': constants.PROTOCOL_HTTP,
'protocol_port': 81}
self.post(path, body, status=409, expect_errors=True)
def test_create_defaults(self):
defaults = {'name': None, 'default_pool_id': None,
'description': None, 'enabled': True,
'connection_limit': None, 'tls_certificate_id': None,
'sni_containers': [], 'insert_headers': {}}
lb_listener = {'protocol': constants.PROTOCOL_HTTP,
'protocol_port': 80,
'project_id': self.project_id}
response = self.post(self.listeners_path, lb_listener)
listener_api = response.json
extra_expects = {'provisioning_status': constants.PENDING_CREATE,
'operating_status': constants.OFFLINE}
lb_listener.update(extra_expects)
lb_listener.update(defaults)
self.assertTrue(uuidutils.is_uuid_like(listener_api.get('id')))
lb_listener['id'] = listener_api.get('id')
self.assertIsNotNone(listener_api.pop('created_at'))
self.assertIsNone(listener_api.pop('updated_at'))
self.assertEqual(listener_api['project_id'],
listener_api.pop('tenant_id'))
self.assertEqual(lb_listener, listener_api)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_final_lb_statuses(self.lb.get('id'))
self.assert_final_listener_statuses(self.lb.get('id'),
listener_api.get('id'))
def test_create_over_quota(self):
lb_listener = {'protocol': constants.PROTOCOL_HTTP,
'protocol_port': 80,
'project_id': self.project_id}
self.check_quota_met_true_mock.start()
self.addCleanup(self.check_quota_met_true_mock.stop)
self.post(self.listeners_path, lb_listener, status=403)
def test_update(self):
tls_uuid = uuidutils.generate_uuid()
listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_TCP, 80,
name='listener1', description='desc1',
enabled=False, connection_limit=10,
tls_certificate_id=tls_uuid,
default_pool_id=None)
self.set_lb_status(self.lb.get('id'))
new_listener = {'name': 'listener2', 'enabled': True,
'default_pool_id': self.pool.get('id')}
listener_path = self.LISTENER_PATH.format(
lb_id=self.lb.get('id'), listener_id=listener.get('id'))
api_listener = self.put(listener_path, new_listener).json
update_expect = {'name': 'listener2', 'enabled': True,
'default_pool_id': self.pool.get('id'),
'provisioning_status': constants.PENDING_UPDATE,
'operating_status': constants.ONLINE}
listener.update(update_expect)
self.assertEqual(listener.pop('created_at'),
api_listener.pop('created_at'))
self.assertNotEqual(listener.pop('updated_at'),
api_listener.pop('updated_at'))
self.assertNotEqual(listener, api_listener)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_final_listener_statuses(self.lb.get('id'),
api_listener.get('id'))
def test_update_bad_listener_id(self):
listener_path = self.LISTENER_PATH.format(lb_id=self.lb.get('id'),
listener_id='SEAN-CONNERY')
self.put(listener_path, body={}, status=404)
def test_update_with_bad_default_pool_id(self):
bad_pool_uuid = uuidutils.generate_uuid()
listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_TCP, 80,
name='listener1', description='desc1',
enabled=False, connection_limit=10,
default_pool_id=self.pool.get('id'))
self.set_lb_status(self.lb.get('id'))
new_listener = {'name': 'listener2', 'enabled': True,
'default_pool_id': bad_pool_uuid}
listener_path = self.LISTENER_PATH.format(
lb_id=self.lb.get('id'), listener_id=listener.get('id'))
self.put(listener_path, new_listener, status=404)
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_final_listener_statuses(self.lb.get('id'),
listener.get('id'))
def test_create_listeners_same_port(self):
listener1 = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_TCP, 80)
self.set_lb_status(self.lb.get('id'))
listener2_post = {'protocol': listener1.get('protocol'),
'protocol_port': listener1.get('protocol_port'),
'project_id': self.project_id}
self.post(self.listeners_path, listener2_post, status=409)
def test_update_listeners_same_port(self):
        self.skipTest('This test should pass with a validation layer.')
listener1 = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_TCP, 80)
self.set_lb_status(self.lb.get('id'))
listener2 = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_TCP, 81)
self.set_lb_status(self.lb.get('id'))
listener2_put = {'protocol': listener1.get('protocol'),
'protocol_port': listener1.get('protocol_port')}
listener2_path = self.LISTENER_PATH.format(
lb_id=self.lb.get('id'), listener_id=listener2.get('id'))
self.put(listener2_path, listener2_put, status=409)
def test_delete(self):
listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 80)
self.set_lb_status(self.lb.get('id'))
listener_path = self.LISTENER_PATH.format(
lb_id=self.lb.get('id'), listener_id=listener.get('id'))
self.delete(listener_path)
response = self.get(listener_path)
api_listener = response.json
expected = {'name': None, 'default_pool_id': None,
'description': None, 'enabled': True,
'operating_status': constants.ONLINE,
'provisioning_status': constants.PENDING_DELETE,
'connection_limit': None}
listener.update(expected)
self.assertIsNone(listener.pop('updated_at'))
self.assertIsNotNone(api_listener.pop('updated_at'))
self.assertEqual(listener, api_listener)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_final_lb_statuses(self.lb.get('id'))
self.assert_final_listener_statuses(self.lb.get('id'),
api_listener.get('id'),
delete=True)
def test_delete_bad_listener_id(self):
listener_path = self.LISTENER_PATH.format(lb_id=self.lb.get('id'),
listener_id='SEAN-CONNERY')
self.delete(listener_path, status=404)
def test_create_listener_bad_protocol(self):
lb_listener = {'protocol': 'SEAN_CONNERY',
'protocol_port': 80}
self.post(self.listeners_path, lb_listener, status=400)
def test_update_listener_bad_protocol(self):
        self.skipTest('This test should pass after a validation layer.')
listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_TCP, 80)
self.set_lb_status(self.lb.get('id'))
new_listener = {'protocol': 'SEAN_CONNERY',
'protocol_port': 80}
listener_path = self.LISTENER_PATH.format(
lb_id=self.lb.get('id'), listener_id=listener.get('id'))
self.put(listener_path, new_listener, status=400)
def test_update_pending_create(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
lb_listener = {'name': 'listener1', 'description': 'desc1',
'enabled': False, 'protocol': constants.PROTOCOL_HTTP,
'protocol_port': 80, 'connection_limit': 10,
'project_id': self.project_id}
self.post(self.LISTENERS_PATH.format(lb_id=lb.get('id')),
lb_listener, status=409)
def test_delete_pending_update(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
self.set_lb_status(lb.get('id'))
lb_listener = {'name': 'listener1', 'description': 'desc1',
'enabled': False, 'protocol': constants.PROTOCOL_HTTP,
'protocol_port': 80, 'connection_limit': 10,
'project_id': self.project_id}
api_listener = self.post(
self.LISTENERS_PATH.format(lb_id=lb.get('id')), lb_listener).json
self.delete(self.LISTENER_PATH.format(
lb_id=lb.get('id'), listener_id=api_listener.get('id')),
status=409)
def test_update_pending_update(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
self.set_lb_status(lb.get('id'))
lb_listener = {'name': 'listener1', 'description': 'desc1',
'enabled': False, 'protocol': constants.PROTOCOL_HTTP,
'protocol_port': 80, 'connection_limit': 10,
'project_id': self.project_id}
api_listener = self.post(
self.LISTENERS_PATH.format(lb_id=lb.get('id')), lb_listener).json
self.set_lb_status(lb.get('id'))
self.put(self.LB_PATH.format(lb_id=lb.get('id')), {'name': 'hi'})
self.put(self.LISTENER_PATH.format(
lb_id=lb.get('id'), listener_id=api_listener.get('id')),
{}, status=409)
def test_update_pending_delete(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
self.set_lb_status(lb.get('id'))
lb_listener = {'name': 'listener1', 'description': 'desc1',
'enabled': False, 'protocol': constants.PROTOCOL_HTTP,
'protocol_port': 80, 'connection_limit': 10,
'project_id': self.project_id}
api_listener = self.post(
self.LISTENERS_PATH.format(lb_id=lb.get('id')), lb_listener).json
self.set_lb_status(lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(lb_id=lb.get('id')))
self.put(self.LISTENER_PATH.format(
lb_id=lb.get('id'), listener_id=api_listener.get('id')),
{}, status=409)
def test_delete_pending_delete(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
self.set_lb_status(lb.get('id'))
lb_listener = {'name': 'listener1', 'description': 'desc1',
'enabled': False, 'protocol': constants.PROTOCOL_HTTP,
'protocol_port': 80, 'connection_limit': 10,
'project_id': self.project_id}
api_listener = self.post(
self.LISTENERS_PATH.format(lb_id=lb.get('id')), lb_listener).json
self.set_lb_status(lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(lb_id=lb.get('id')))
self.delete(self.LISTENER_PATH.format(
lb_id=lb.get('id'), listener_id=api_listener.get('id')),
status=409)
def test_create_with_tls_termination_data(self):
tls = {'certificate': 'blah', 'intermediate_certificate': 'blah',
'private_key': 'blah', 'passphrase': 'blah'}
listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 80,
tls_termination=tls)
self.assertIsNone(listener.get('tls_termination'))
get_listener = self.get(self.LISTENER_PATH.format(
lb_id=self.lb.get('id'), listener_id=listener.get('id'))).json
self.assertIsNone(get_listener.get('tls_termination'))
def test_update_with_tls_termination_data(self):
tls = {'certificate': 'blah', 'intermediate_certificate': 'blah',
'private_key': 'blah', 'passphrase': 'blah'}
listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 80)
self.set_lb_status(self.lb.get('id'))
listener_path = self.LISTENER_PATH.format(
lb_id=self.lb.get('id'), listener_id=listener.get('id'))
listener = self.put(listener_path, {'tls_termination': tls}).json
self.assertIsNone(listener.get('tls_termination'))
get_listener = self.get(listener_path).json
self.assertIsNone(get_listener.get('tls_termination'))
def test_create_with_valid_insert_headers(self):
lb_listener = {'protocol': 'HTTP',
'protocol_port': 80,
'insert_headers': {'X-Forwarded-For': 'true'},
'project_id': self.project_id}
self.post(self.listeners_path, lb_listener, status=202)
def test_create_with_bad_insert_headers(self):
lb_listener = {'protocol': 'HTTP',
'protocol_port': 80,
# 'insert_headers': {'x': 'x'}}
'insert_headers': {'X-Forwarded-Four': 'true'},
'project_id': self.project_id}
self.post(self.listeners_path, lb_listener, status=400)


@@ -1,51 +0,0 @@
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from octavia.common import constants
from octavia.tests.functional.api.v1 import base
from oslo_utils import uuidutils
class TestListenerStatistics(base.BaseAPITest):
FAKE_UUID_1 = uuidutils.generate_uuid()
def setUp(self):
super(TestListenerStatistics, self).setUp()
self.lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()})
self.set_lb_status(self.lb.get('id'))
self.listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 80)
self.set_lb_status(self.lb.get('id'))
self.ls_path = self.LISTENER_STATS_PATH.format(
lb_id=self.lb.get('id'), listener_id=self.listener.get('id'))
self.amphora = self.create_amphora(uuidutils.generate_uuid(),
self.lb.get('id'))
def test_get(self):
ls = self.create_listener_stats(listener_id=self.listener.get('id'),
amphora_id=self.amphora.id)
expected = {
'listener': {
'bytes_in': ls['bytes_in'],
'bytes_out': ls['bytes_out'],
'active_connections': ls['active_connections'],
'total_connections': ls['total_connections'],
'request_errors': ls['request_errors']
}
}
response = self.get(self.ls_path)
response_body = response.json
self.assertEqual(expected, response_body)


@@ -1,939 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import mock
from oslo_utils import uuidutils
from octavia.common import constants
from octavia.network import base as network_base
from octavia.network import data_models as network_models
from octavia.tests.functional.api.v1 import base
class TestLoadBalancer(base.BaseAPITest):
def _assert_request_matches_response(self, req, resp, **optionals):
self.assertTrue(uuidutils.is_uuid_like(resp.get('id')))
self.assertEqual(req.get('name'), resp.get('name'))
self.assertEqual(req.get('description'), resp.get('description'))
self.assertEqual(constants.PENDING_CREATE,
resp.get('provisioning_status'))
self.assertEqual(constants.OFFLINE, resp.get('operating_status'))
self.assertEqual(req.get('enabled', True), resp.get('enabled'))
self.assertIsNotNone(resp.get('created_at'))
self.assertIsNone(resp.get('updated_at'))
for key, value in optionals.items():
self.assertEqual(value, req.get(key))
self.assert_final_lb_statuses(resp.get('id'))
def test_empty_list(self):
response = self.get(self.LBS_PATH)
api_list = response.json
self.assertEqual([], api_list)
def test_create(self, **optionals):
lb_json = {'name': 'test1',
'vip': {'subnet_id': uuidutils.generate_uuid()},
'project_id': self.project_id}
lb_json.update(optionals)
response = self.post(self.LBS_PATH, lb_json)
api_lb = response.json
self._assert_request_matches_response(lb_json, api_lb)
def test_create_with_id(self):
self.test_create(id=uuidutils.generate_uuid())
def test_create_with_duplicate_id(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()})
self.post(self.LBS_PATH,
{'id': lb.get('id'),
'vip': {'subnet_id': uuidutils.generate_uuid()}},
status=409, expect_errors=True)
def test_create_with_project_id(self):
self.test_create(project_id=uuidutils.generate_uuid())
def test_create_over_quota(self):
lb_json = {'name': 'test1',
'vip': {'subnet_id': uuidutils.generate_uuid()},
'project_id': self.project_id}
self.check_quota_met_true_mock.start()
self.addCleanup(self.check_quota_met_true_mock.stop)
self.post(self.LBS_PATH, lb_json, status=403)
def test_create_without_vip(self):
lb_json = {}
response = self.post(self.LBS_PATH, lb_json, status=400)
err_msg = ("Invalid input for field/attribute vip. Value: 'None'. "
"Mandatory field missing.")
self.assertEqual(response.json.get('faultstring'), err_msg)
def test_create_with_empty_vip(self):
lb_json = {'vip': {},
'project_id': self.project_id}
response = self.post(self.LBS_PATH, lb_json, status=400)
err_msg = ('Validation failure: '
'VIP must contain one of: port_id, network_id, subnet_id.')
self.assertEqual(response.json.get('faultstring'), err_msg)
def test_create_with_invalid_vip_subnet(self):
subnet_id = uuidutils.generate_uuid()
lb_json = {'vip': {'subnet_id': subnet_id},
'project_id': self.project_id}
with mock.patch("octavia.network.drivers.noop_driver.driver"
".NoopManager.get_subnet") as mock_get_subnet:
mock_get_subnet.side_effect = network_base.SubnetNotFound
response = self.post(self.LBS_PATH, lb_json, status=400)
err_msg = 'Subnet {} not found.'.format(subnet_id)
self.assertEqual(response.json.get('faultstring'), err_msg)
def test_create_with_invalid_vip_network_subnet(self):
network = network_models.Network(id=uuidutils.generate_uuid(),
subnets=[])
subnet_id = uuidutils.generate_uuid()
lb_json = {
'vip': {
'subnet_id': subnet_id,
'network_id': network.id
},
'project_id': self.project_id}
with mock.patch("octavia.network.drivers.noop_driver.driver"
".NoopManager.get_network") as mock_get_network:
mock_get_network.return_value = network
response = self.post(self.LBS_PATH, lb_json, status=400)
err_msg = 'Subnet {} not found.'.format(subnet_id)
self.assertEqual(response.json.get('faultstring'), err_msg)
def test_create_with_vip_subnet_fills_network(self):
subnet = network_models.Subnet(id=uuidutils.generate_uuid(),
network_id=uuidutils.generate_uuid())
vip = {'subnet_id': subnet.id}
lb_json = {'vip': vip,
'project_id': self.project_id}
with mock.patch("octavia.network.drivers.noop_driver.driver"
".NoopManager.get_subnet") as mock_get_subnet:
mock_get_subnet.return_value = subnet
response = self.post(self.LBS_PATH, lb_json)
api_lb = response.json
self._assert_request_matches_response(lb_json, api_lb)
self.assertEqual(subnet.id,
api_lb.get('vip', {}).get('subnet_id'))
self.assertEqual(subnet.network_id,
api_lb.get('vip', {}).get('network_id'))
def test_create_with_vip_network_has_no_subnet(self):
network = network_models.Network(id=uuidutils.generate_uuid(),
subnets=[])
lb_json = {
'vip': {'network_id': network.id},
'project_id': self.project_id}
with mock.patch("octavia.network.drivers.noop_driver.driver"
".NoopManager.get_network") as mock_get_network:
mock_get_network.return_value = network
response = self.post(self.LBS_PATH, lb_json, status=400)
err_msg = ("Validation failure: "
"Supplied network does not contain a subnet.")
self.assertEqual(response.json.get('faultstring'), err_msg)
def test_create_with_vip_network_picks_subnet_ipv4(self):
network_id = uuidutils.generate_uuid()
subnet1 = network_models.Subnet(id=uuidutils.generate_uuid(),
network_id=network_id,
ip_version=6)
subnet2 = network_models.Subnet(id=uuidutils.generate_uuid(),
network_id=network_id,
ip_version=4)
network = network_models.Network(id=network_id,
subnets=[subnet1.id, subnet2.id])
vip = {'network_id': network.id}
lb_json = {'vip': vip,
'project_id': self.project_id}
with mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_network") as mock_get_network, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_subnet") as mock_get_subnet:
mock_get_network.return_value = network
mock_get_subnet.side_effect = [subnet1, subnet2]
response = self.post(self.LBS_PATH, lb_json)
api_lb = response.json
self._assert_request_matches_response(lb_json, api_lb)
self.assertEqual(subnet2.id,
api_lb.get('vip', {}).get('subnet_id'))
self.assertEqual(network_id,
api_lb.get('vip', {}).get('network_id'))
def test_create_with_vip_network_picks_subnet_ipv6(self):
network_id = uuidutils.generate_uuid()
subnet = network_models.Subnet(id=uuidutils.generate_uuid(),
network_id=network_id,
ip_version=6)
network = network_models.Network(id=network_id,
subnets=[subnet.id])
vip = {'network_id': network.id}
lb_json = {'vip': vip,
'project_id': self.project_id}
with mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_network") as mock_get_network, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_subnet") as mock_get_subnet:
mock_get_network.return_value = network
mock_get_subnet.return_value = subnet
response = self.post(self.LBS_PATH, lb_json)
api_lb = response.json
self._assert_request_matches_response(lb_json, api_lb)
self.assertEqual(subnet.id,
api_lb.get('vip', {}).get('subnet_id'))
self.assertEqual(network_id,
api_lb.get('vip', {}).get('network_id'))
def test_create_with_vip_full(self):
subnet = network_models.Subnet(id=uuidutils.generate_uuid())
network = network_models.Network(id=uuidutils.generate_uuid(),
subnets=[subnet])
port = network_models.Port(id=uuidutils.generate_uuid(),
network_id=network.id)
vip = {'ip_address': '10.0.0.1',
'subnet_id': subnet.id,
'network_id': network.id,
'port_id': port.id}
lb_json = {'name': 'test1', 'description': 'test1_desc',
'vip': vip, 'enabled': False,
'project_id': self.project_id}
with mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_network") as mock_get_network, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_port") as mock_get_port:
mock_get_network.return_value = network
mock_get_port.return_value = port
response = self.post(self.LBS_PATH, lb_json)
api_lb = response.json
self._assert_request_matches_response(lb_json, api_lb)
self.assertEqual(vip, api_lb.get('vip'))
def test_create_with_long_name(self):
lb_json = {'name': 'n' * 256, 'vip': {}}
self.post(self.LBS_PATH, lb_json, status=400)
def test_create_with_long_description(self):
lb_json = {'description': 'n' * 256, 'vip': {}}
self.post(self.LBS_PATH, lb_json, status=400)
def test_create_with_nonuuid_vip_attributes(self):
lb_json = {'vip': {'subnet_id': 'HI'}}
self.post(self.LBS_PATH, lb_json, status=400)
def test_get_all(self):
lb1 = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()}, name='lb1')
lb2 = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()}, name='lb2')
lb3 = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()}, name='lb3')
response = self.get(self.LBS_PATH,
params={'project_id': self.project_id})
lbs = response.json
lb_id_names = [(lb.get('id'), lb.get('name')) for lb in lbs]
self.assertEqual(3, len(lbs))
self.assertIn((lb1.get('id'), lb1.get('name')), lb_id_names)
self.assertIn((lb2.get('id'), lb2.get('name')), lb_id_names)
self.assertIn((lb3.get('id'), lb3.get('name')), lb_id_names)
def test_get_all_by_project_id(self):
project1_id = uuidutils.generate_uuid()
project2_id = uuidutils.generate_uuid()
lb1 = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', project_id=project1_id)
lb2 = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb2', project_id=project1_id)
lb3 = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb3', project_id=project2_id)
project1_path = "{0}?project_id={1}".format(self.LBS_PATH, project1_id)
response = self.get(project1_path)
lbs = response.json
lb_id_names = [(lb.get('id'), lb.get('name')) for lb in lbs]
self.assertEqual(2, len(lbs))
self.assertIn((lb1.get('id'), lb1.get('name')), lb_id_names)
self.assertIn((lb2.get('id'), lb2.get('name')), lb_id_names)
project2_path = "{0}?project_id={1}".format(self.LBS_PATH, project2_id)
response = self.get(project2_path)
lbs = response.json
lb_id_names = [(lb.get('id'), lb.get('name')) for lb in lbs]
self.assertEqual(1, len(lbs))
self.assertIn((lb3.get('id'), lb3.get('name')), lb_id_names)
def test_get(self):
subnet = network_models.Subnet(id=uuidutils.generate_uuid())
network = network_models.Network(id=uuidutils.generate_uuid(),
subnets=[subnet])
port = network_models.Port(id=uuidutils.generate_uuid(),
network_id=network.id)
vip = {'ip_address': '10.0.0.1',
'subnet_id': subnet.id,
'network_id': network.id,
'port_id': port.id}
with mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager"
".get_network") as mock_get_network, mock.patch(
"octavia.network.drivers.noop_driver.driver.NoopManager."
"get_port") as mock_get_port:
mock_get_network.return_value = network
mock_get_port.return_value = port
lb = self.create_load_balancer(vip, name='lb1',
description='test1_desc',
enabled=False)
response = self.get(self.LB_PATH.format(lb_id=lb.get('id')))
self.assertEqual('lb1', response.json.get('name'))
self.assertEqual('test1_desc', response.json.get('description'))
self.assertFalse(response.json.get('enabled'))
self.assertEqual(vip, response.json.get('vip'))
def test_get_bad_lb_id(self):
path = self.LB_PATH.format(lb_id='SEAN-CONNERY')
self.get(path, status=404)
def test_update(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
lb_json = {'name': 'lb2'}
lb = self.set_lb_status(lb.get('id'))
response = self.put(self.LB_PATH.format(lb_id=lb.get('id')), lb_json)
api_lb = response.json
r_vip = api_lb.get('vip')
self.assertIsNone(r_vip.get('network_id'))
self.assertEqual('lb1', api_lb.get('name'))
self.assertEqual('desc1', api_lb.get('description'))
self.assertFalse(api_lb.get('enabled'))
self.assertEqual(constants.PENDING_UPDATE,
api_lb.get('provisioning_status'))
self.assertEqual(lb.get('operational_status'),
api_lb.get('operational_status'))
self.assertIsNotNone(api_lb.get('created_at'))
self.assertIsNotNone(api_lb.get('updated_at'))
self.assert_final_lb_statuses(api_lb.get('id'))
def test_update_with_vip(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
lb_json = {'vip': {'subnet_id': '1234'}}
lb = self.set_lb_status(lb.get('id'))
self.put(self.LB_PATH.format(lb_id=lb.get('id')), lb_json, status=400)
def test_update_bad_lb_id(self):
path = self.LB_PATH.format(lb_id='SEAN-CONNERY')
self.put(path, body={}, status=404)
def test_update_pending_create(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
lb_json = {'name': 'Roberto'}
self.put(self.LB_PATH.format(lb_id=lb.get('id')), lb_json, status=409)
def test_delete_pending_create(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
self.delete(self.LB_PATH.format(lb_id=lb.get('id')), status=409)
def test_update_pending_update(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
lb_json = {'name': 'Bob'}
lb = self.set_lb_status(lb.get('id'))
self.put(self.LB_PATH.format(lb_id=lb.get('id')), lb_json)
self.put(self.LB_PATH.format(lb_id=lb.get('id')), lb_json, status=409)
def test_delete_pending_update(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
lb_json = {'name': 'Steve'}
lb = self.set_lb_status(lb.get('id'))
self.put(self.LB_PATH.format(lb_id=lb.get('id')), lb_json)
self.delete(self.LB_PATH.format(lb_id=lb.get('id')), status=409)
def test_delete_with_error_status(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
lb = self.set_lb_status(lb.get('id'), status=constants.ERROR)
self.delete(self.LB_PATH.format(lb_id=lb.get('id')), status=202)
def test_update_pending_delete(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
lb = self.set_lb_status(lb.get('id'))
self.delete(self.LB_PATH.format(lb_id=lb.get('id')))
lb_json = {'name': 'John'}
self.put(self.LB_PATH.format(lb_id=lb.get('id')), lb_json, status=409)
def test_delete_pending_delete(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
lb = self.set_lb_status(lb.get('id'))
self.delete(self.LB_PATH.format(lb_id=lb.get('id')))
self.delete(self.LB_PATH.format(lb_id=lb.get('id')), status=409)
def test_delete(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
lb = self.set_lb_status(lb.get('id'))
self.delete(self.LB_PATH.format(lb_id=lb.get('id')))
response = self.get(self.LB_PATH.format(lb_id=lb.get('id')))
api_lb = response.json
self.assertEqual('lb1', api_lb.get('name'))
self.assertEqual('desc1', api_lb.get('description'))
self.assertFalse(api_lb.get('enabled'))
self.assertEqual(constants.PENDING_DELETE,
api_lb.get('provisioning_status'))
self.assertEqual(lb.get('operational_status'),
api_lb.get('operational_status'))
self.assert_final_lb_statuses(api_lb.get('id'), delete=True)
def test_delete_with_listener(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
lb = self.set_lb_status(lb.get('id'))
self.create_listener(
lb_id=lb.get('id'),
protocol=constants.PROTOCOL_HTTP,
protocol_port=80
)
lb = self.set_lb_status(lb.get('id'))
self.delete(self.LB_PATH.format(lb_id=lb.get('id')), status=400)
def test_delete_with_pool(self):
lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()},
name='lb1', description='desc1', enabled=False)
lb = self.set_lb_status(lb.get('id'))
self.create_pool_sans_listener(
lb_id=lb.get('id'),
protocol=constants.PROTOCOL_HTTP,
lb_algorithm=constants.LB_ALGORITHM_ROUND_ROBIN
)
lb = self.set_lb_status(lb.get('id'))
self.delete(self.LB_PATH.format(lb_id=lb.get('id')), status=400)
def test_delete_bad_lb_id(self):
path = self.LB_PATH.format(lb_id='bad_uuid')
self.delete(path, status=404)
class TestLoadBalancerGraph(base.BaseAPITest):
def setUp(self):
super(TestLoadBalancerGraph, self).setUp()
self._project_id = uuidutils.generate_uuid()
def _assert_graphs_equal(self, expected_graph, observed_graph):
observed_graph_copy = copy.deepcopy(observed_graph)
del observed_graph_copy['created_at']
del observed_graph_copy['updated_at']
self.assertEqual(observed_graph_copy['project_id'],
observed_graph_copy.pop('tenant_id'))
obs_lb_id = observed_graph_copy.pop('id')
self.assertTrue(uuidutils.is_uuid_like(obs_lb_id))
expected_listeners = expected_graph.pop('listeners', [])
observed_listeners = observed_graph_copy.pop('listeners', [])
self.assertEqual(expected_graph, observed_graph_copy)
for observed_listener in observed_listeners:
del observed_listener['created_at']
del observed_listener['updated_at']
self.assertEqual(observed_listener['project_id'],
observed_listener.pop('tenant_id'))
self.assertTrue(uuidutils.is_uuid_like(
observed_listener.pop('id')))
default_pool = observed_listener.get('default_pool')
if default_pool:
observed_listener.pop('default_pool_id')
self.assertTrue(default_pool.get('id'))
default_pool.pop('id')
default_pool.pop('created_at')
default_pool.pop('updated_at')
self.assertEqual(default_pool['project_id'],
default_pool.pop('tenant_id'))
hm = default_pool.get('health_monitor')
if hm:
self.assertEqual(hm['project_id'],
hm.pop('tenant_id'))
for member in default_pool.get('members', []):
self.assertTrue(member.get('id'))
member.pop('id')
member.pop('created_at')
member.pop('updated_at')
self.assertEqual(member['project_id'],
member.pop('tenant_id'))
if observed_listener.get('sni_containers'):
observed_listener['sni_containers'].sort()
o_l7policies = observed_listener.get('l7policies')
if o_l7policies:
for o_l7policy in o_l7policies:
if o_l7policy.get('redirect_pool'):
r_pool = o_l7policy.get('redirect_pool')
self.assertTrue(r_pool.get('id'))
r_pool.pop('id')
r_pool.pop('created_at')
r_pool.pop('updated_at')
self.assertEqual(r_pool['project_id'],
r_pool.pop('tenant_id'))
self.assertTrue(o_l7policy.get('redirect_pool_id'))
o_l7policy.pop('redirect_pool_id')
if r_pool.get('members'):
for r_member in r_pool.get('members'):
self.assertTrue(r_member.get('id'))
r_member.pop('id')
r_member.pop('created_at')
r_member.pop('updated_at')
self.assertEqual(r_member['project_id'],
r_member.pop('tenant_id'))
self.assertTrue(o_l7policy.get('id'))
o_l7policy.pop('id')
l7rules = o_l7policy.get('l7rules')
for l7rule in l7rules:
self.assertTrue(l7rule.get('id'))
l7rule.pop('id')
self.assertIn(observed_listener, expected_listeners)
def _get_lb_bodies(self, create_listeners, expected_listeners):
subnet_id = uuidutils.generate_uuid()
create_lb = {
'name': 'lb1',
'project_id': self._project_id,
'vip': {'subnet_id': subnet_id},
'listeners': create_listeners
}
expected_lb = {
'description': None,
'enabled': True,
'provisioning_status': constants.PENDING_CREATE,
'operating_status': constants.OFFLINE,
}
expected_lb.update(create_lb)
expected_lb['listeners'] = expected_listeners
expected_lb['vip'] = {'ip_address': None, 'port_id': None,
'subnet_id': subnet_id, 'network_id': None}
return create_lb, expected_lb
def _get_listener_bodies(self, name='listener1', protocol_port=80,
create_default_pool=None,
expected_default_pool=None,
create_l7policies=None,
expected_l7policies=None,
create_sni_containers=None,
expected_sni_containers=None):
create_listener = {
'name': name,
'protocol_port': protocol_port,
'protocol': constants.PROTOCOL_HTTP,
'project_id': self._project_id
}
expected_listener = {
'description': None,
'tls_certificate_id': None,
'sni_containers': [],
'connection_limit': None,
'enabled': True,
'provisioning_status': constants.PENDING_CREATE,
'operating_status': constants.OFFLINE,
'insert_headers': {}
}
if create_sni_containers:
create_listener['sni_containers'] = create_sni_containers
expected_listener.update(create_listener)
if create_default_pool:
pool = create_default_pool
create_listener['default_pool'] = pool
if pool.get('id'):
create_listener['default_pool_id'] = pool['id']
if create_l7policies:
l7policies = create_l7policies
create_listener['l7policies'] = l7policies
if expected_default_pool:
expected_listener['default_pool'] = expected_default_pool
if expected_sni_containers:
expected_listener['sni_containers'] = expected_sni_containers
if expected_l7policies:
expected_listener['l7policies'] = expected_l7policies
return create_listener, expected_listener
def _get_pool_bodies(self, name='pool1', create_members=None,
expected_members=None, create_hm=None,
expected_hm=None, protocol=constants.PROTOCOL_HTTP,
session_persistence=True):
create_pool = {
'name': name,
'protocol': protocol,
'lb_algorithm': constants.LB_ALGORITHM_ROUND_ROBIN,
'project_id': self._project_id
}
if session_persistence:
create_pool['session_persistence'] = {
'type': constants.SESSION_PERSISTENCE_SOURCE_IP,
'cookie_name': None}
if create_members:
create_pool['members'] = create_members
if create_hm:
create_pool['health_monitor'] = create_hm
expected_pool = {
'description': None,
'session_persistence': None,
'members': [],
'enabled': True,
'operating_status': constants.OFFLINE
}
expected_pool.update(create_pool)
if expected_members:
expected_pool['members'] = expected_members
if expected_hm:
expected_pool['health_monitor'] = expected_hm
return create_pool, expected_pool
def _get_member_bodies(self, protocol_port=80):
create_member = {
'ip_address': '10.0.0.1',
'protocol_port': protocol_port,
'project_id': self._project_id
}
expected_member = {
'weight': 1,
'enabled': True,
'subnet_id': None,
'operating_status': constants.NO_MONITOR,
'monitor_address': None,
'monitor_port': None
}
expected_member.update(create_member)
return create_member, expected_member
def _get_hm_bodies(self):
create_hm = {
'type': constants.HEALTH_MONITOR_PING,
'delay': 1,
'timeout': 1,
'fall_threshold': 1,
'rise_threshold': 1,
'project_id': self._project_id
}
expected_hm = {
'http_method': 'GET',
'url_path': '/',
'expected_codes': '200',
'enabled': True
}
expected_hm.update(create_hm)
return create_hm, expected_hm
def _get_sni_container_bodies(self):
create_sni_container1 = uuidutils.generate_uuid()
create_sni_container2 = uuidutils.generate_uuid()
create_sni_containers = [create_sni_container1, create_sni_container2]
expected_sni_containers = [create_sni_container1,
create_sni_container2]
expected_sni_containers.sort()
return create_sni_containers, expected_sni_containers
def _get_l7policies_bodies(self, create_pool=None, expected_pool=None,
create_l7rules=None, expected_l7rules=None):
create_l7policies = []
if create_pool:
create_l7policy = {
'action': constants.L7POLICY_ACTION_REDIRECT_TO_POOL,
'redirect_pool': create_pool,
'position': 1,
'enabled': False
}
else:
create_l7policy = {
'action': constants.L7POLICY_ACTION_REDIRECT_TO_URL,
'redirect_url': 'http://127.0.0.1/',
'position': 1,
'enabled': False
}
create_l7policies.append(create_l7policy)
expected_l7policy = {
'name': None,
'description': None,
'redirect_url': None,
'redirect_prefix': None,
'l7rules': []
}
expected_l7policy.update(create_l7policy)
expected_l7policies = []
if expected_pool:
if create_pool.get('id'):
expected_l7policy['redirect_pool_id'] = create_pool.get('id')
expected_l7policy['redirect_pool'] = expected_pool
expected_l7policies.append(expected_l7policy)
if expected_l7rules:
expected_l7policies[0]['l7rules'] = expected_l7rules
if create_l7rules:
create_l7policies[0]['l7rules'] = create_l7rules
return create_l7policies, expected_l7policies
def _get_l7rules_bodies(self, value="localhost"):
create_l7rules = [{
'type': constants.L7RULE_TYPE_HOST_NAME,
'compare_type': constants.L7RULE_COMPARE_TYPE_EQUAL_TO,
'value': value,
'invert': False
}]
expected_l7rules = [{
'key': None
}]
expected_l7rules[0].update(create_l7rules[0])
return create_l7rules, expected_l7rules
def test_with_one_listener(self):
create_listener, expected_listener = self._get_listener_bodies()
create_lb, expected_lb = self._get_lb_bodies([create_listener],
[expected_listener])
response = self.post(self.LBS_PATH, create_lb)
api_lb = response.json
self._assert_graphs_equal(expected_lb, api_lb)
def test_with_many_listeners(self):
create_listener1, expected_listener1 = self._get_listener_bodies()
create_listener2, expected_listener2 = self._get_listener_bodies(
name='listener2', protocol_port=81
)
create_lb, expected_lb = self._get_lb_bodies(
[create_listener1, create_listener2],
[expected_listener1, expected_listener2])
response = self.post(self.LBS_PATH, create_lb)
api_lb = response.json
self._assert_graphs_equal(expected_lb, api_lb)
def test_with_one_listener_one_pool(self):
create_pool, expected_pool = self._get_pool_bodies()
create_listener, expected_listener = self._get_listener_bodies(
create_default_pool=create_pool,
expected_default_pool=expected_pool
)
create_lb, expected_lb = self._get_lb_bodies([create_listener],
[expected_listener])
response = self.post(self.LBS_PATH, create_lb)
api_lb = response.json
self._assert_graphs_equal(expected_lb, api_lb)
def test_with_many_listeners_one_pool(self):
create_pool1, expected_pool1 = self._get_pool_bodies()
create_pool2, expected_pool2 = self._get_pool_bodies(name='pool2')
create_listener1, expected_listener1 = self._get_listener_bodies(
create_default_pool=create_pool1,
expected_default_pool=expected_pool1
)
create_listener2, expected_listener2 = self._get_listener_bodies(
create_default_pool=create_pool2,
expected_default_pool=expected_pool2,
name='listener2', protocol_port=81
)
create_lb, expected_lb = self._get_lb_bodies(
[create_listener1, create_listener2],
[expected_listener1, expected_listener2])
response = self.post(self.LBS_PATH, create_lb)
api_lb = response.json
self._assert_graphs_equal(expected_lb, api_lb)
def test_with_one_listener_one_member(self):
create_member, expected_member = self._get_member_bodies()
create_pool, expected_pool = self._get_pool_bodies(
create_members=[create_member],
expected_members=[expected_member])
create_listener, expected_listener = self._get_listener_bodies(
create_default_pool=create_pool,
expected_default_pool=expected_pool)
create_lb, expected_lb = self._get_lb_bodies([create_listener],
[expected_listener])
response = self.post(self.LBS_PATH, create_lb)
api_lb = response.json
self._assert_graphs_equal(expected_lb, api_lb)
def test_with_one_listener_one_hm(self):
create_hm, expected_hm = self._get_hm_bodies()
create_pool, expected_pool = self._get_pool_bodies(
create_hm=create_hm,
expected_hm=expected_hm)
create_listener, expected_listener = self._get_listener_bodies(
create_default_pool=create_pool,
expected_default_pool=expected_pool)
create_lb, expected_lb = self._get_lb_bodies([create_listener],
[expected_listener])
response = self.post(self.LBS_PATH, create_lb)
api_lb = response.json
self._assert_graphs_equal(expected_lb, api_lb)
def test_with_one_listener_sni_containers(self):
create_sni_containers, expected_sni_containers = (
self._get_sni_container_bodies())
create_listener, expected_listener = self._get_listener_bodies(
create_sni_containers=create_sni_containers,
expected_sni_containers=expected_sni_containers)
create_lb, expected_lb = self._get_lb_bodies([create_listener],
[expected_listener])
response = self.post(self.LBS_PATH, create_lb)
api_lb = response.json
self._assert_graphs_equal(expected_lb, api_lb)
def test_with_l7policy_redirect_pool_no_rule(self):
create_pool, expected_pool = self._get_pool_bodies(create_members=[],
expected_members=[])
create_l7policies, expected_l7policies = self._get_l7policies_bodies(
create_pool=create_pool, expected_pool=expected_pool)
create_listener, expected_listener = self._get_listener_bodies(
create_l7policies=create_l7policies,
expected_l7policies=expected_l7policies)
create_lb, expected_lb = self._get_lb_bodies([create_listener],
[expected_listener])
response = self.post(self.LBS_PATH, create_lb)
api_lb = response.json
self._assert_graphs_equal(expected_lb, api_lb)
def test_with_l7policy_redirect_pool_one_rule(self):
create_pool, expected_pool = self._get_pool_bodies(create_members=[],
expected_members=[])
create_l7rules, expected_l7rules = self._get_l7rules_bodies()
create_l7policies, expected_l7policies = self._get_l7policies_bodies(
create_pool=create_pool, expected_pool=expected_pool,
create_l7rules=create_l7rules, expected_l7rules=expected_l7rules)
create_listener, expected_listener = self._get_listener_bodies(
create_l7policies=create_l7policies,
expected_l7policies=expected_l7policies)
create_lb, expected_lb = self._get_lb_bodies([create_listener],
[expected_listener])
response = self.post(self.LBS_PATH, create_lb)
api_lb = response.json
self._assert_graphs_equal(expected_lb, api_lb)
def test_with_l7policy_redirect_pool_bad_rule(self):
create_pool, expected_pool = self._get_pool_bodies(create_members=[],
expected_members=[])
create_l7rules, expected_l7rules = self._get_l7rules_bodies(
value="local host")
create_l7policies, expected_l7policies = self._get_l7policies_bodies(
create_pool=create_pool, expected_pool=expected_pool,
create_l7rules=create_l7rules, expected_l7rules=expected_l7rules)
create_listener, expected_listener = self._get_listener_bodies(
create_l7policies=create_l7policies,
expected_l7policies=expected_l7policies)
create_lb, expected_lb = self._get_lb_bodies([create_listener],
[expected_listener])
self.post(self.LBS_PATH, create_lb, expect_errors=True)
def test_with_l7policies_one_redirect_pool_one_rule(self):
create_pool, expected_pool = self._get_pool_bodies(create_members=[],
expected_members=[])
create_l7rules, expected_l7rules = self._get_l7rules_bodies()
create_l7policies, expected_l7policies = self._get_l7policies_bodies(
create_pool=create_pool, expected_pool=expected_pool,
create_l7rules=create_l7rules, expected_l7rules=expected_l7rules)
c_l7policies_url, e_l7policies_url = self._get_l7policies_bodies()
for policy in c_l7policies_url:
policy['position'] = 2
create_l7policies.append(policy)
for policy in e_l7policies_url:
policy['position'] = 2
expected_l7policies.append(policy)
create_listener, expected_listener = self._get_listener_bodies(
create_l7policies=create_l7policies,
expected_l7policies=expected_l7policies)
create_lb, expected_lb = self._get_lb_bodies([create_listener],
[expected_listener])
response = self.post(self.LBS_PATH, create_lb)
api_lb = response.json
self._assert_graphs_equal(expected_lb, api_lb)
def test_with_l7policies_redirect_pools_no_rules(self):
create_pool, expected_pool = self._get_pool_bodies()
create_l7policies, expected_l7policies = self._get_l7policies_bodies(
create_pool=create_pool, expected_pool=expected_pool)
r_create_pool, r_expected_pool = self._get_pool_bodies()
c_l7policies_url, e_l7policies_url = self._get_l7policies_bodies(
create_pool=r_create_pool, expected_pool=r_expected_pool)
for policy in c_l7policies_url:
policy['position'] = 2
create_l7policies.append(policy)
for policy in e_l7policies_url:
policy['position'] = 2
expected_l7policies.append(policy)
create_listener, expected_listener = self._get_listener_bodies(
create_l7policies=create_l7policies,
expected_l7policies=expected_l7policies)
create_lb, expected_lb = self._get_lb_bodies([create_listener],
[expected_listener])
response = self.post(self.LBS_PATH, create_lb)
api_lb = response.json
self._assert_graphs_equal(expected_lb, api_lb)
def test_with_one_of_everything(self):
create_member, expected_member = self._get_member_bodies()
create_hm, expected_hm = self._get_hm_bodies()
create_pool, expected_pool = self._get_pool_bodies(
create_members=[create_member],
expected_members=[expected_member],
create_hm=create_hm,
expected_hm=expected_hm,
protocol=constants.PROTOCOL_TCP)
create_sni_containers, expected_sni_containers = (
self._get_sni_container_bodies())
create_l7rules, expected_l7rules = self._get_l7rules_bodies()
r_create_member, r_expected_member = self._get_member_bodies(
protocol_port=88)
r_create_pool, r_expected_pool = self._get_pool_bodies(
create_members=[r_create_member],
expected_members=[r_expected_member])
create_l7policies, expected_l7policies = self._get_l7policies_bodies(
create_pool=r_create_pool, expected_pool=r_expected_pool,
create_l7rules=create_l7rules, expected_l7rules=expected_l7rules)
create_listener, expected_listener = self._get_listener_bodies(
create_default_pool=create_pool,
expected_default_pool=expected_pool,
create_l7policies=create_l7policies,
expected_l7policies=expected_l7policies,
create_sni_containers=create_sni_containers,
expected_sni_containers=expected_sni_containers)
create_lb, expected_lb = self._get_lb_bodies([create_listener],
[expected_listener])
response = self.post(self.LBS_PATH, create_lb)
api_lb = response.json
self._assert_graphs_equal(expected_lb, api_lb)
def test_db_create_failure(self):
create_listener, expected_listener = self._get_listener_bodies()
create_lb, _ = self._get_lb_bodies([create_listener],
[expected_listener])
with mock.patch('octavia.db.repositories.Repositories.'
'create_load_balancer_tree') as repo_mock:
repo_mock.side_effect = Exception('I am a DB Error')
response = self.post(self.LBS_PATH, create_lb, expect_errors=True)
self.assertEqual(500, response.status_code)


@@ -1,57 +0,0 @@
# Copyright 2016 IBM
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import uuidutils

from octavia.common import constants
from octavia.tests.functional.api.v1 import base
class TestLoadBalancerStatistics(base.BaseAPITest):
FAKE_UUID_1 = uuidutils.generate_uuid()
def setUp(self):
super(TestLoadBalancerStatistics, self).setUp()
self.lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()})
self.set_lb_status(self.lb.get('id'))
self.listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 80)
self.set_lb_status(self.lb.get('id'))
self.lb_path = self.LB_STATS_PATH.format(lb_id=self.lb.get('id'))
self.amphora = self.create_amphora(uuidutils.generate_uuid(),
self.lb.get('id'))
def test_get(self):
ls = self.create_listener_stats(listener_id=self.listener.get('id'),
amphora_id=self.amphora.id)
expected = {
'loadbalancer': {
'bytes_in': ls['bytes_in'],
'bytes_out': ls['bytes_out'],
'active_connections': ls['active_connections'],
'total_connections': ls['total_connections'],
'request_errors': ls['request_errors'],
'listeners': [
{'id': self.listener.get('id'),
'bytes_in': ls['bytes_in'],
'bytes_out': ls['bytes_out'],
'active_connections': ls['active_connections'],
'total_connections': ls['total_connections'],
'request_errors': ls['request_errors']}]
}
}
response = self.get(self.lb_path)
response_body = response.json
self.assertEqual(expected, response_body)


@@ -1,430 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_utils import uuidutils
from octavia.common import constants
from octavia.network import base as network_base
from octavia.tests.functional.api.v1 import base
class TestMember(base.BaseAPITest):
def setUp(self):
super(TestMember, self).setUp()
self.lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()})
self.set_lb_status(self.lb.get('id'))
self.listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 80)
self.set_lb_status(self.lb.get('id'))
self.pool = self.create_pool_sans_listener(
self.lb.get('id'), constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(self.lb.get('id'))
self.pool_with_listener = self.create_pool(
self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(self.lb.get('id'))
self.members_path = self.MEMBERS_PATH.format(
lb_id=self.lb.get('id'),
pool_id=self.pool.get('id'))
self.member_path = self.members_path + '/{member_id}'
self.deprecated_members_path = self.DEPRECATED_MEMBERS_PATH.format(
lb_id=self.lb.get('id'), listener_id=self.listener.get('id'),
pool_id=self.pool.get('id'))
self.deprecated_member_path = (self.deprecated_members_path +
'/{member_id}')
def test_get(self):
api_member = self.create_member(self.lb.get('id'),
self.pool.get('id'),
'10.0.0.1', 80)
response = self.get(self.member_path.format(
member_id=api_member.get('id')))
response_body = response.json
self.assertEqual(api_member, response_body)
def test_bad_get(self):
self.get(self.member_path.format(member_id=uuidutils.generate_uuid()),
status=404)
def test_get_all(self):
api_m_1 = self.create_member(self.lb.get('id'),
self.pool.get('id'),
'10.0.0.1', 80)
self.set_lb_status(self.lb.get('id'))
api_m_2 = self.create_member(self.lb.get('id'),
self.pool.get('id'),
'10.0.0.2', 80)
self.set_lb_status(self.lb.get('id'))
# Original objects didn't have the updated operating status that exists
# in the DB.
api_m_1['operating_status'] = constants.NO_MONITOR
api_m_2['operating_status'] = constants.NO_MONITOR
response = self.get(self.members_path)
response_body = response.json
self.assertIsInstance(response_body, list)
self.assertEqual(2, len(response_body))
self.assertIn(api_m_1, response_body)
self.assertIn(api_m_2, response_body)
def test_empty_get_all(self):
response = self.get(self.members_path)
response_body = response.json
self.assertIsInstance(response_body, list)
self.assertEqual(0, len(response_body))
def test_create_sans_listener(self):
api_member = self.create_member(self.lb.get('id'),
self.pool.get('id'),
'10.0.0.1', 80)
self.assertEqual('10.0.0.1', api_member.get('ip_address'))
self.assertEqual(80, api_member.get('protocol_port'))
self.assertIsNotNone(api_member.get('created_at'))
self.assertIsNone(api_member.get('updated_at'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_with_id(self):
mid = uuidutils.generate_uuid()
api_member = self.create_member(self.lb.get('id'),
self.pool.get('id'),
'10.0.0.1', 80, id=mid)
self.assertEqual(mid, api_member.get('id'))
def test_create_with_project_id(self):
pid = uuidutils.generate_uuid()
api_member = self.create_member(self.lb.get('id'),
self.pool.get('id'),
'10.0.0.1', 80, project_id=pid)
# The project_id passed in the request body is ignored; the member
# is created under the authenticated project instead.
self.assertEqual(self.project_id, api_member.get('project_id'))
def test_create_with_duplicate_id(self):
member = self.create_member(self.lb.get('id'),
self.pool.get('id'),
'10.0.0.1', 80)
self.set_lb_status(self.lb.get('id'), constants.ACTIVE)
path = self.MEMBERS_PATH.format(lb_id=self.lb.get('id'),
pool_id=self.pool.get('id'))
body = {'id': member.get('id'), 'ip_address': '10.0.0.3',
'protocol_port': 81}
self.post(path, body, status=409, expect_errors=True)
def test_bad_create(self):
api_member = {'name': 'test1'}
self.post(self.members_path, api_member, status=400)
def test_create_with_bad_handler(self):
self.handler_mock().member.create.side_effect = Exception()
self.create_member_with_listener(
self.lb.get('id'), self.listener.get('id'),
self.pool_with_listener.get('id'),
'10.0.0.1', 80)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_create_with_attached_listener(self):
api_member = self.create_member_with_listener(
self.lb.get('id'), self.listener.get('id'),
self.pool.get('id'), '10.0.0.1', 80)
self.assertEqual('10.0.0.1', api_member.get('ip_address'))
self.assertEqual(80, api_member.get('protocol_port'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_with_monitor_address_and_port(self):
api_member = self.create_member_with_listener(
self.lb.get('id'), self.listener.get('id'),
self.pool.get('id'), '10.0.0.1', 80,
monitor_address='192.0.2.2',
monitor_port=9090)
self.assertEqual('10.0.0.1', api_member.get('ip_address'))
self.assertEqual(80, api_member.get('protocol_port'))
self.assertEqual('192.0.2.2', api_member.get('monitor_address'))
self.assertEqual(9090, api_member.get('monitor_port'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_duplicate_create(self):
member = {'ip_address': '10.0.0.1', 'protocol_port': 80,
'project_id': self.project_id}
self.post(self.members_path, member, status=202)
self.set_lb_status(self.lb.get('id'))
self.post(self.members_path, member, status=409)
def test_create_with_bad_subnet(self, **optionals):
with mock.patch(
'octavia.common.utils.get_network_driver') as net_mock:
net_mock.return_value.get_subnet = mock.Mock(
side_effect=network_base.SubnetNotFound('Subnet not found'))
subnet_id = uuidutils.generate_uuid()
response = self.create_member(self.lb.get('id'),
self.pool.get('id'),
'10.0.0.1', 80, expect_error=True,
subnet_id=subnet_id)
err_msg = 'Subnet ' + subnet_id + ' not found.'
self.assertEqual(response.get('faultstring'), err_msg)
def test_create_with_valid_subnet(self, **optionals):
with mock.patch(
'octavia.common.utils.get_network_driver') as net_mock:
subnet_id = uuidutils.generate_uuid()
net_mock.return_value.get_subnet.return_value = subnet_id
response = self.create_member(self.lb.get('id'),
self.pool.get('id'),
'10.0.0.1', 80, expect_error=True,
subnet_id=subnet_id)
self.assertEqual('10.0.0.1', response.get('ip_address'))
self.assertEqual(80, response.get('protocol_port'))
self.assertEqual(subnet_id, response.get('subnet_id'))
def test_create_over_quota(self):
self.check_quota_met_true_mock.start()
self.addCleanup(self.check_quota_met_true_mock.stop)
body = {'ip_address': '10.0.0.3', 'protocol_port': 81}
self.post(self.members_path, body, status=403)
def test_update(self):
old_port = 80
new_port = 88
api_member = self.create_member_with_listener(
self.lb.get('id'), self.listener.get('id'),
self.pool.get('id'), '10.0.0.1', old_port)
self.set_lb_status(self.lb.get('id'))
new_member = {'protocol_port': new_port}
response = self.put(self.deprecated_member_path.format(
member_id=api_member.get('id')), new_member, status=202)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
response_body = response.json
self.assertEqual(old_port, response_body.get('protocol_port'))
self.assertEqual(api_member.get('created_at'),
response_body.get('created_at'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_bad_update(self):
api_member = self.create_member(self.lb.get('id'),
self.pool.get('id'),
'10.0.0.1', 80)
new_member = {'protocol_port': 'ten'}
self.put(self.member_path.format(member_id=api_member.get('id')),
new_member, expect_errors=True)
def test_update_with_bad_handler(self):
api_member = self.create_member_with_listener(
self.lb.get('id'), self.listener.get('id'),
self.pool_with_listener.get('id'),
'10.0.0.1', 80)
self.set_lb_status(self.lb.get('id'))
new_member = {'protocol_port': 88}
self.handler_mock().member.update.side_effect = Exception()
self.put(self.member_path.format(
member_id=api_member.get('id')), new_member, status=202)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_duplicate_update(self):
self.skipTest('This test should pass after a validation layer.')
member = {'ip_address': '10.0.0.1', 'protocol_port': 80}
self.post(self.members_path, member)
self.set_lb_status(self.lb.get('id'))
member['protocol_port'] = 81
response = self.post(self.members_path, member)
self.set_lb_status(self.lb.get('id'))
member2 = response.json
member['protocol_port'] = 80
self.put(self.member_path.format(member_id=member2.get('id')),
member, status=409)
def test_delete(self):
api_member = self.create_member_with_listener(
self.lb.get('id'), self.listener.get('id'),
self.pool_with_listener.get('id'),
'10.0.0.1', 80)
self.set_lb_status(self.lb.get('id'))
member = self.get(self.member_path.format(
member_id=api_member.get('id'))).json
api_member['operating_status'] = constants.ONLINE
self.assertIsNone(api_member.pop('updated_at'))
self.assertIsNotNone(member.pop('updated_at'))
self.assertEqual(api_member, member)
self.delete(self.member_path.format(member_id=api_member.get('id')))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_bad_delete(self):
self.delete(self.member_path.format(
member_id=uuidutils.generate_uuid()), status=404)
def test_delete_with_bad_handler(self):
api_member = self.create_member_with_listener(
self.lb.get('id'), self.listener.get('id'),
self.pool_with_listener.get('id'),
'10.0.0.1', 80)
self.set_lb_status(self.lb.get('id'))
member = self.get(self.member_path.format(
member_id=api_member.get('id'))).json
api_member['operating_status'] = constants.ONLINE
self.assertIsNone(api_member.pop('updated_at'))
self.assertIsNotNone(member.pop('updated_at'))
self.assertEqual(api_member, member)
self.handler_mock().member.delete.side_effect = Exception()
self.delete(self.member_path.format(
member_id=api_member.get('id')))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_create_when_lb_pending_update(self):
self.create_member(self.lb.get('id'),
self.pool.get('id'), ip_address="10.0.0.2",
protocol_port=80)
self.set_lb_status(self.lb.get('id'))
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
self.post(self.members_path,
body={'ip_address': '10.0.0.1', 'protocol_port': 80,
'project_id': self.project_id},
status=409)
def test_update_when_lb_pending_update(self):
member = self.create_member(self.lb.get('id'),
self.pool.get('id'), ip_address="10.0.0.1",
protocol_port=80)
self.set_lb_status(self.lb.get('id'))
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
self.put(self.member_path.format(member_id=member.get('id')),
body={'protocol_port': 88}, status=409)
def test_delete_when_lb_pending_update(self):
member = self.create_member(self.lb.get('id'),
self.pool.get('id'), ip_address="10.0.0.1",
protocol_port=80)
self.set_lb_status(self.lb.get('id'))
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
self.delete(self.member_path.format(member_id=member.get('id')),
status=409)
def test_create_when_lb_pending_delete(self):
self.create_member(self.lb.get('id'),
self.pool.get('id'), ip_address="10.0.0.1",
protocol_port=80)
self.set_lb_status(self.lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
self.post(self.members_path,
body={'ip_address': '10.0.0.2', 'protocol_port': 88,
'project_id': self.project_id},
status=409)
def test_update_when_lb_pending_delete(self):
member = self.create_member(self.lb.get('id'),
self.pool.get('id'), ip_address="10.0.0.1",
protocol_port=80)
self.set_lb_status(self.lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
self.put(self.member_path.format(member_id=member.get('id')),
body={'protocol_port': 88}, status=409)
def test_delete_when_lb_pending_delete(self):
member = self.create_member(self.lb.get('id'),
self.pool.get('id'), ip_address="10.0.0.1",
protocol_port=80)
self.set_lb_status(self.lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
self.delete(self.member_path.format(member_id=member.get('id')),
status=409)


@@ -1,628 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import uuidutils
from octavia.common import constants
from octavia.tests.functional.api.v1 import base
class TestPool(base.BaseAPITest):
def setUp(self):
super(TestPool, self).setUp()
self.lb = self.create_load_balancer(
{'subnet_id': uuidutils.generate_uuid()})
self.lb = self.set_lb_status(self.lb.get('id'))
self.listener = self.create_listener(self.lb.get('id'),
constants.PROTOCOL_HTTP, 80)
self.lb = self.set_lb_status(self.lb.get('id'))
self.listener = self.get_listener(self.lb.get('id'),
self.listener.get('id'))
self.pools_path = self.POOLS_PATH.format(lb_id=self.lb.get('id'))
self.pool_path = self.pools_path + '/{pool_id}'
self.pools_path_with_listener = (self.pools_path +
'?listener_id={listener_id}')
self.pools_path_deprecated = self.DEPRECATED_POOLS_PATH.format(
lb_id=self.lb.get('id'), listener_id=self.listener.get('id'))
self.pool_path_deprecated = self.pools_path_deprecated + '/{pool_id}'
def test_get(self):
api_pool = self.create_pool_sans_listener(
self.lb.get('id'), constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(lb_id=self.lb.get('id'))
response = self.get(self.pool_path.format(pool_id=api_pool.get('id')))
response_body = response.json
self.assertEqual(api_pool, response_body)
def test_bad_get(self):
self.get(self.pool_path.format(pool_id=uuidutils.generate_uuid()),
status=404)
def test_get_all(self):
api_pool = self.create_pool_sans_listener(
self.lb.get('id'), constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(lb_id=self.lb.get('id'))
response = self.get(self.pools_path)
response_body = response.json
self.assertIsInstance(response_body, list)
self.assertEqual(1, len(response_body))
self.assertEqual(api_pool.get('id'), response_body[0].get('id'))
def test_get_all_with_listener(self):
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(lb_id=self.lb.get('id'))
response = self.get(self.pools_path_with_listener.format(
listener_id=self.listener.get('id')))
response_body = response.json
self.assertIsInstance(response_body, list)
self.assertEqual(1, len(response_body))
self.assertEqual(api_pool.get('id'), response_body[0].get('id'))
def test_get_all_with_bad_listener(self):
self.get(self.pools_path_with_listener.format(
listener_id='bad_id'), status=404, expect_errors=True)
def test_empty_get_all(self):
response = self.get(self.pools_path)
response_body = response.json
self.assertIsInstance(response_body, list)
self.assertEqual(0, len(response_body))
def test_create(self):
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assertEqual(constants.PROTOCOL_HTTP, api_pool.get('protocol'))
self.assertEqual(constants.LB_ALGORITHM_ROUND_ROBIN,
api_pool.get('lb_algorithm'))
self.assertIsNotNone(api_pool.get('created_at'))
self.assertIsNone(api_pool.get('updated_at'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_with_proxy_protocol(self):
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_PROXY,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assertEqual(constants.PROTOCOL_PROXY, api_pool.get('protocol'))
self.assertEqual(constants.LB_ALGORITHM_ROUND_ROBIN,
api_pool.get('lb_algorithm'))
self.assertIsNotNone(api_pool.get('created_at'))
self.assertIsNone(api_pool.get('updated_at'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_sans_listener(self):
api_pool = self.create_pool_sans_listener(
self.lb.get('id'), constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.assertEqual(constants.PROTOCOL_HTTP, api_pool.get('protocol'))
self.assertEqual(constants.LB_ALGORITHM_ROUND_ROBIN,
api_pool.get('lb_algorithm'))
# Make sure listener status is unchanged, but LB status is changed.
# LB should still be locked even with pool and subordinate object
# updates.
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
def test_create_with_listener_id_in_pool_dict(self):
api_pool = self.create_pool_sans_listener(
self.lb.get('id'), constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN,
listener_id=self.listener.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assertEqual(constants.PROTOCOL_HTTP, api_pool.get('protocol'))
self.assertEqual(constants.LB_ALGORITHM_ROUND_ROBIN,
api_pool.get('lb_algorithm'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_with_id(self):
pid = uuidutils.generate_uuid()
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN,
id=pid)
self.assertEqual(pid, api_pool.get('id'))
def test_create_with_project_id(self):
pid = uuidutils.generate_uuid()
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN,
project_id=pid)
# The project_id passed in the request body is ignored; the pool
# is created under the authenticated project instead.
self.assertEqual(self.project_id, api_pool.get('project_id'))
def test_create_with_duplicate_id(self):
pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(self.lb.get('id'), constants.ACTIVE)
path = self.POOLS_PATH.format(lb_id=self.lb.get('id'),
listener_id=self.listener.get('id'))
body = {'id': pool.get('id'), 'protocol': constants.PROTOCOL_HTTP,
'lb_algorithm': constants.LB_ALGORITHM_ROUND_ROBIN,
'project_id': self.project_id}
self.post(path, body, status=409, expect_errors=True)
def test_bad_create(self):
api_pool = {'name': 'test1'}
self.post(self.pools_path, api_pool, status=400)
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_with_listener_with_default_pool_id_set(self):
self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(self.lb.get('id'), constants.ACTIVE)
path = self.pools_path_deprecated.format(
lb_id=self.lb.get('id'), listener_id=self.listener.get('id'))
body = {'protocol': constants.PROTOCOL_HTTP,
'lb_algorithm': constants.LB_ALGORITHM_ROUND_ROBIN,
'project_id': self.project_id}
self.post(path, body, status=409, expect_errors=True)
def test_create_bad_protocol(self):
pool = {'protocol': 'STUPID_PROTOCOL',
'lb_algorithm': constants.LB_ALGORITHM_ROUND_ROBIN}
self.post(self.pools_path, pool, status=400)
def test_create_with_bad_handler(self):
self.handler_mock().pool.create.side_effect = Exception()
self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_create_over_quota(self):
self.check_quota_met_true_mock.start()
self.addCleanup(self.check_quota_met_true_mock.stop)
body = {'protocol': constants.PROTOCOL_HTTP,
'lb_algorithm': constants.LB_ALGORITHM_ROUND_ROBIN,
'project_id': self.project_id}
self.post(self.pools_path, body, status=403)
def test_update(self):
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(lb_id=self.lb.get('id'))
new_pool = {'name': 'new_name'}
self.put(self.pool_path.format(pool_id=api_pool.get('id')),
new_pool, status=202)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
response = self.get(self.pool_path.format(pool_id=api_pool.get('id')))
response_body = response.json
# The handler is mocked in these tests, so the rename is never
# actually applied and the stored name is unchanged.
self.assertNotEqual('new_name', response_body.get('name'))
self.assertIsNotNone(response_body.get('created_at'))
self.assertIsNotNone(response_body.get('updated_at'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_bad_update(self):
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(self.lb.get('id'))
new_pool = {'enabled': 'one'}
self.put(self.pool_path.format(pool_id=api_pool.get('id')),
new_pool, status=400)
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_update_with_bad_handler(self):
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(lb_id=self.lb.get('id'))
new_pool = {'name': 'new_name'}
self.handler_mock().pool.update.side_effect = Exception()
self.put(self.pool_path.format(pool_id=api_pool.get('id')),
new_pool, status=202)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_delete(self):
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(lb_id=self.lb.get('id'))
api_pool['operating_status'] = constants.ONLINE
response = self.get(self.pool_path.format(
pool_id=api_pool.get('id')))
pool = response.json
self.assertIsNone(api_pool.pop('updated_at'))
self.assertIsNotNone(pool.pop('updated_at'))
self.assertEqual(api_pool, pool)
self.delete(self.pool_path.format(pool_id=api_pool.get('id')))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_bad_delete(self):
self.delete(self.pool_path.format(
pool_id=uuidutils.generate_uuid()), status=404)
def test_delete_with_l7policy(self):
api_pool = self.create_pool_sans_listener(
self.lb.get('id'), constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(lb_id=self.lb.get('id'))
self.create_l7policy(self.lb.get('id'), self.listener.get('id'),
constants.L7POLICY_ACTION_REDIRECT_TO_POOL,
redirect_pool_id=api_pool.get('id'))
self.set_lb_status(lb_id=self.lb.get('id'))
self.delete(self.pool_path.format(
pool_id=api_pool.get('id')), status=409)
def test_delete_with_bad_handler(self):
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(lb_id=self.lb.get('id'))
api_pool['operating_status'] = constants.ONLINE
response = self.get(self.pool_path.format(
pool_id=api_pool.get('id')))
pool = response.json
self.assertIsNone(api_pool.pop('updated_at'))
self.assertIsNotNone(pool.pop('updated_at'))
self.assertEqual(api_pool, pool)
self.handler_mock().pool.delete.side_effect = Exception()
self.delete(self.pool_path.format(pool_id=api_pool.get('id')))
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ERROR)
def test_create_with_session_persistence(self):
sp = {"type": constants.SESSION_PERSISTENCE_HTTP_COOKIE,
"cookie_name": "test_cookie_name"}
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN,
session_persistence=sp)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
response = self.get(self.pool_path.format(
pool_id=api_pool.get('id')))
response_body = response.json
sess_p = response_body.get('session_persistence')
self.assertIsNotNone(sess_p)
self.assertEqual(constants.SESSION_PERSISTENCE_HTTP_COOKIE,
sess_p.get('type'))
self.assertEqual('test_cookie_name', sess_p.get('cookie_name'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_create_with_bad_session_persistence(self):
sp = {"type": "persistence_type",
"cookie_name": "test_cookie_name"}
pool = {'protocol': constants.PROTOCOL_HTTP,
'lb_algorithm': constants.LB_ALGORITHM_ROUND_ROBIN,
'session_persistence': sp}
self.post(self.pools_path, pool, status=400)
def test_add_session_persistence(self):
sp = {"type": constants.SESSION_PERSISTENCE_HTTP_COOKIE,
"cookie_name": "test_cookie_name"}
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(lb_id=self.lb.get('id'))
response = self.put(self.pool_path.format(pool_id=api_pool.get('id')),
body={'session_persistence': sp})
api_pool = response.json
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assertNotEqual(sp, api_pool.get('session_persistence'))
def test_update_session_persistence(self):
sp = {"type": constants.SESSION_PERSISTENCE_HTTP_COOKIE,
"cookie_name": "test_cookie_name"}
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN,
session_persistence=sp)
self.set_lb_status(lb_id=self.lb.get('id'))
response = self.get(self.pool_path.format(
pool_id=api_pool.get('id')))
response_body = response.json
sess_p = response_body.get('session_persistence')
sess_p['cookie_name'] = 'new_test_cookie_name'
api_pool = self.put(self.pool_path.format(pool_id=api_pool.get('id')),
body={'session_persistence': sess_p}).json
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assertNotEqual(sess_p, api_pool.get('session_persistence'))
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_update_preserve_session_persistence(self):
sp = {"type": constants.SESSION_PERSISTENCE_HTTP_COOKIE,
"cookie_name": "test_cookie_name"}
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN,
session_persistence=sp)
self.set_lb_status(lb_id=self.lb.get('id'))
pool_update = {'lb_algorithm': constants.LB_ALGORITHM_SOURCE_IP}
api_pool = self.put(self.pool_path.format(pool_id=api_pool.get('id')),
body=pool_update).json
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
response = self.get(self.pool_path.format(
pool_id=api_pool.get('id'))).json
self.assertEqual(sp, response.get('session_persistence'))
def test_update_bad_session_persistence(self):
self.skipTest('This test should pass after a validation layer.')
sp = {"type": constants.SESSION_PERSISTENCE_HTTP_COOKIE,
"cookie_name": "test_cookie_name"}
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN,
session_persistence=sp)
self.set_lb_status(lb_id=self.lb.get('id'))
response = self.get(self.pool_path.format(
pool_id=api_pool.get('id')))
response_body = response.json
sess_p = response_body.get('session_persistence')
sess_p['type'] = 'persistence_type'
self.put(self.pool_path.format(pool_id=api_pool.get('id')),
body={'session_persistence': sess_p}, status=400)
def test_delete_with_session_persistence(self):
sp = {"type": constants.SESSION_PERSISTENCE_HTTP_COOKIE,
"cookie_name": "test_cookie_name"}
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN,
session_persistence=sp)
self.set_lb_status(lb_id=self.lb.get('id'))
self.delete(self.pool_path.format(pool_id=api_pool.get('id')),
status=202)
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.set_lb_status(self.lb.get('id'))
self.assert_correct_lb_status(self.lb.get('id'),
constants.ACTIVE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.ACTIVE, constants.ONLINE)
def test_delete_session_persistence(self):
sp = {"type": constants.SESSION_PERSISTENCE_HTTP_COOKIE,
"cookie_name": "test_cookie_name"}
api_pool = self.create_pool(self.lb.get('id'),
self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN,
session_persistence=sp)
self.set_lb_status(lb_id=self.lb.get('id'))
sp = {'session_persistence': None}
api_pool = self.put(self.pool_path.format(pool_id=api_pool.get('id')),
body=sp, status=202).json
self.assert_correct_lb_status(self.lb.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assert_correct_listener_status(self.lb.get('id'),
self.listener.get('id'),
constants.PENDING_UPDATE,
constants.ONLINE)
self.assertIsNotNone(api_pool.get('session_persistence'))
def test_create_when_lb_pending_update(self):
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
self.post(self.pools_path,
body={'protocol': constants.PROTOCOL_HTTP,
'lb_algorithm': constants.LB_ALGORITHM_ROUND_ROBIN,
'project_id': self.project_id},
status=409)
def test_update_when_lb_pending_update(self):
pool = self.create_pool(self.lb.get('id'), self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(self.lb.get('id'))
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
self.put(self.pool_path.format(pool_id=pool.get('id')),
body={'protocol': constants.PROTOCOL_HTTPS},
status=409)
def test_delete_when_lb_pending_update(self):
pool = self.create_pool(self.lb.get('id'), self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(self.lb.get('id'))
self.put(self.LB_PATH.format(lb_id=self.lb.get('id')),
body={'name': 'test_name_change'})
self.delete(self.pool_path.format(pool_id=pool.get('id')), status=409)
def test_create_when_lb_pending_delete(self):
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
self.post(self.pools_path,
body={'protocol': constants.PROTOCOL_HTTP,
'lb_algorithm': constants.LB_ALGORITHM_ROUND_ROBIN,
'project_id': self.project_id},
status=409)
def test_update_when_lb_pending_delete(self):
pool = self.create_pool(self.lb.get('id'), self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(self.lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
self.put(self.pool_path.format(pool_id=pool.get('id')),
body={'protocol': constants.PROTOCOL_HTTPS},
status=409)
def test_delete_when_lb_pending_delete(self):
pool = self.create_pool(self.lb.get('id'), self.listener.get('id'),
constants.PROTOCOL_HTTP,
constants.LB_ALGORITHM_ROUND_ROBIN)
self.set_lb_status(self.lb.get('id'))
self.delete(self.LB_DELETE_CASCADE_PATH.format(
lb_id=self.lb.get('id')))
self.delete(self.pool_path.format(pool_id=pool.get('id')), status=409)
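The provisioning-status rules these pool tests assert — a mutating call on a child resource is rejected with 409 unless the load balancer is ACTIVE, and an accepted mutation parks it in PENDING_UPDATE until `set_lb_status()` simulates the controller finishing — can be sketched as a minimal state model (hypothetical code, not Octavia's real implementation):

```python
# Minimal sketch of the provisioning-status rules the tests above
# assert; not Octavia's actual code. A mutating call is rejected
# unless the load balancer is ACTIVE, and every accepted mutation
# moves it to PENDING_UPDATE until the controller completes.
class LoadBalancer:
    def __init__(self):
        self.provisioning_status = 'ACTIVE'

    def mutate(self):
        """Any create/update/delete on a child resource (pool, etc.)."""
        if self.provisioning_status != 'ACTIVE':
            raise RuntimeError('409 Conflict: load balancer is immutable')
        self.provisioning_status = 'PENDING_UPDATE'

    def controller_done(self):
        """What set_lb_status() simulates in the functional tests."""
        self.provisioning_status = 'ACTIVE'
```

This is why `test_create_when_lb_pending_update` and friends expect 409: the earlier PUT left the load balancer in PENDING_UPDATE, so the follow-up request is refused.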

@@ -1,155 +0,0 @@
# Copyright 2016 Rackspace
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import random
from oslo_config import cfg
from oslo_config import fixture as oslo_fixture
from oslo_utils import uuidutils
from octavia.common import constants as const
from octavia.tests.functional.api.v1 import base
CONF = cfg.CONF
class TestQuotas(base.BaseAPITest):
def setUp(self):
super(TestQuotas, self).setUp()
conf = self.useFixture(oslo_fixture.Config(cfg.CONF))
conf.config(
group="quotas",
default_load_balancer_quota=random.randrange(const.QUOTA_UNLIMITED,
9000))
conf.config(
group="quotas",
default_listener_quota=random.randrange(const.QUOTA_UNLIMITED,
9000))
conf.config(
group="quotas",
default_member_quota=random.randrange(const.QUOTA_UNLIMITED, 9000))
# We need to make sure unlimited gets tested each pass
conf.config(group="quotas", default_pool_quota=const.QUOTA_UNLIMITED)
conf.config(
group="quotas",
default_health_monitor_quota=random.randrange(
const.QUOTA_UNLIMITED, 9000))
self.project_id = uuidutils.generate_uuid()
def _assert_quotas_equal(self, observed, expected=None):
if not expected:
expected = {'load_balancer':
CONF.quotas.default_load_balancer_quota,
'listener': CONF.quotas.default_listener_quota,
'pool': CONF.quotas.default_pool_quota,
'health_monitor':
CONF.quotas.default_health_monitor_quota,
'member': CONF.quotas.default_member_quota}
self.assertEqual(expected['load_balancer'], observed['load_balancer'])
self.assertEqual(expected['listener'], observed['listener'])
self.assertEqual(expected['pool'], observed['pool'])
self.assertEqual(expected['health_monitor'],
observed['health_monitor'])
self.assertEqual(expected['member'], observed['member'])
def test_get_all_quotas_no_quotas(self):
response = self.get(self.QUOTAS_PATH)
quota_list = response.json
self.assertEqual({'quotas': []}, quota_list)
def test_get_all_quotas_with_quotas(self):
project_id1 = uuidutils.generate_uuid()
project_id2 = uuidutils.generate_uuid()
quota_path1 = self.QUOTA_PATH.format(project_id=project_id1)
quota1 = {'load_balancer': const.QUOTA_UNLIMITED, 'listener': 30,
'pool': 30, 'health_monitor': 30, 'member': 30}
body1 = {'quota': quota1}
self.put(quota_path1, body1)
quota_path2 = self.QUOTA_PATH.format(project_id=project_id2)
quota2 = {'load_balancer': 50, 'listener': 50, 'pool': 50,
'health_monitor': 50, 'member': 50}
body2 = {'quota': quota2}
self.put(quota_path2, body2)
response = self.get(self.QUOTAS_PATH)
quota_list = response.json
quota1['project_id'] = project_id1
quota1['tenant_id'] = project_id1
quota2['project_id'] = project_id2
quota2['tenant_id'] = project_id2
expected = {'quotas': [quota1, quota2]}
self.assertEqual(expected, quota_list)
def test_get_default_quotas(self):
response = self.get(self.QUOTA_DEFAULT_PATH.format(
project_id=self.project_id))
quota_dict = response.json
self._assert_quotas_equal(quota_dict['quota'])
def test_custom_quotas(self):
quota_path = self.QUOTA_PATH.format(project_id=self.project_id)
body = {'quota': {'load_balancer': 30, 'listener': 30, 'pool': 30,
'health_monitor': 30, 'member': 30}}
self.put(quota_path, body)
response = self.get(quota_path)
quota_dict = response.json
self._assert_quotas_equal(quota_dict['quota'], expected=body['quota'])
def test_custom_partial_quotas(self):
quota_path = self.QUOTA_PATH.format(project_id=self.project_id)
body = {'quota': {'load_balancer': 30, 'listener': None, 'pool': 30,
'health_monitor': 30, 'member': 30}}
expected_body = {'quota': {
'load_balancer': 30,
'listener': CONF.quotas.default_listener_quota, 'pool': 30,
'health_monitor': 30, 'member': 30}}
self.put(quota_path, body)
response = self.get(quota_path)
quota_dict = response.json
self._assert_quotas_equal(quota_dict['quota'],
expected=expected_body['quota'])
def test_custom_missing_quotas(self):
quota_path = self.QUOTA_PATH.format(project_id=self.project_id)
body = {'quota': {'load_balancer': 30, 'pool': 30,
'health_monitor': 30, 'member': 30}}
expected_body = {'quota': {
'load_balancer': 30,
'listener': CONF.quotas.default_listener_quota, 'pool': 30,
'health_monitor': 30, 'member': 30}}
self.put(quota_path, body)
response = self.get(quota_path)
quota_dict = response.json
self._assert_quotas_equal(quota_dict['quota'],
expected=expected_body['quota'])
def test_delete_custom_quotas(self):
quota_path = self.QUOTA_PATH.format(project_id=self.project_id)
body = {'quota': {'load_balancer': 30, 'listener': 30, 'pool': 30,
'health_monitor': 30, 'member': 30}}
self.put(quota_path, body)
response = self.get(quota_path)
quota_dict = response.json
self._assert_quotas_equal(quota_dict['quota'], expected=body['quota'])
self.delete(quota_path)
response = self.get(quota_path)
quota_dict = response.json
self._assert_quotas_equal(quota_dict['quota'])
def test_delete_non_existent_custom_quotas(self):
quota_path = self.QUOTA_PATH.format(project_id='bogus')
self.delete(quota_path, status=404)
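The fallback behaviour that `test_custom_partial_quotas` and `test_custom_missing_quotas` exercise — a key that is absent or explicitly `None` in the PUT body takes the configured default — can be sketched in a few lines (hypothetical default values; not the service's actual code path):

```python
# Hypothetical defaults standing in for the [quotas] config options;
# -1 plays the role of QUOTA_UNLIMITED.
DEFAULTS = {'load_balancer': 10, 'listener': 20, 'pool': -1,
            'health_monitor': 10, 'member': 50}

def merge_quota(requested, defaults=DEFAULTS):
    """Return the effective quota: explicit values win, while None or
    missing keys fall back to the default for that resource."""
    return {key: requested.get(key) if requested.get(key) is not None
            else default
            for key, default in defaults.items()}
```

For example, `merge_quota({'load_balancer': 30, 'listener': None})` keeps the explicit load balancer quota and fills every other resource from the defaults, which is exactly what `_assert_quotas_equal` checks against `expected_body` above.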

@@ -1,6 +0,0 @@
===============================================
Tempest Integration of octavia
===============================================
This directory contains Tempest tests to cover the octavia project.

@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

@@ -1,872 +0,0 @@
# Copyright 2012 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import subprocess
import netaddr
from oslo_log import log
from oslo_utils import netutils
from tempest.common import compute
from tempest.common.utils.linux import remote_client
from tempest.common.utils import net_utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import exceptions as lib_exc
import tempest.test
CONF = config.CONF
LOG = log.getLogger(__name__)
class ScenarioTest(tempest.test.BaseTestCase):
"""Base class for scenario tests. Uses tempest own clients. """
credentials = ['primary']
@classmethod
def setup_clients(cls):
super(ScenarioTest, cls).setup_clients()
# Clients (in alphabetical order)
cls.keypairs_client = cls.manager.keypairs_client
cls.servers_client = cls.manager.servers_client
# Neutron network client
cls.networks_client = cls.manager.networks_client
cls.ports_client = cls.manager.ports_client
cls.routers_client = cls.manager.routers_client
cls.subnets_client = cls.manager.subnets_client
cls.floating_ips_client = cls.manager.floating_ips_client
cls.security_groups_client = cls.manager.security_groups_client
cls.security_group_rules_client = (
cls.manager.security_group_rules_client)
# ## Test functions library
#
# The create_[resource] functions only return body and discard the
# resp part which is not used in scenario tests
def _create_port(self, network_id, client=None, namestart='port-quotatest',
**kwargs):
if not client:
client = self.ports_client
name = data_utils.rand_name(namestart)
result = client.create_port(
name=name,
network_id=network_id,
**kwargs)
self.assertIsNotNone(result, 'Unable to allocate port')
port = result['port']
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
client.delete_port, port['id'])
return port
def create_keypair(self, client=None):
if not client:
client = self.keypairs_client
name = data_utils.rand_name(self.__class__.__name__)
# We don't need to create a keypair by pubkey in scenario
body = client.create_keypair(name=name)
self.addCleanup(client.delete_keypair, name)
return body['keypair']
def create_server(self, name=None, image_id=None, flavor=None,
validatable=False, wait_until='ACTIVE',
clients=None, **kwargs):
"""Wrapper utility that returns a test server.
This wrapper utility calls the common create test server and
returns a test server. The purpose of this wrapper is to minimize
the impact on the code of the tests already using this
function.
"""
# NOTE(jlanoux): As a first step, ssh checks in the scenario
# tests need to be run regardless of the run_validation and
# validatable parameters and thus until the ssh validation job
# becomes voting in CI. The test resources management and IP
# association are taken care of in the scenario tests.
# Therefore, the validatable parameter is set to false in all
# those tests. In this way create_server just return a standard
# server and the scenario tests always perform ssh checks.
# Needed for the cross_tenant_traffic test:
if clients is None:
clients = self.manager
if name is None:
name = data_utils.rand_name(self.__class__.__name__ + "-server")
vnic_type = CONF.network.port_vnic_type
# If vnic_type is configured create port for
# every network
if vnic_type:
ports = []
create_port_body = {'binding:vnic_type': vnic_type,
'namestart': 'port-smoke'}
if kwargs:
# Convert security group names to security group ids
# to pass to create_port
if 'security_groups' in kwargs:
security_groups = (
clients.security_groups_client.list_security_groups(
).get('security_groups'))
sec_dict = dict([(s['name'], s['id'])
for s in security_groups])
sec_groups_names = [s['name'] for s in kwargs.pop(
'security_groups')]
security_groups_ids = [sec_dict[s]
for s in sec_groups_names]
if security_groups_ids:
create_port_body[
'security_groups'] = security_groups_ids
networks = kwargs.pop('networks', [])
else:
networks = []
# If there are no networks passed to us we look up
# the project's private networks and create a port.
# This is the same behaviour as we would expect when passing
# the call to the clients with no networks
if not networks:
networks = clients.networks_client.list_networks(
**{'router:external': False, 'fields': 'id'})['networks']
# It's net['uuid'] if networks come from kwargs
# and net['id'] if they come from
# clients.networks_client.list_networks
for net in networks:
net_id = net.get('uuid', net.get('id'))
if 'port' not in net:
port = self._create_port(network_id=net_id,
client=clients.ports_client,
**create_port_body)
ports.append({'port': port['id']})
else:
ports.append({'port': net['port']})
if ports:
kwargs['networks'] = ports
self.ports = ports
tenant_network = self.get_tenant_network()
body, servers = compute.create_test_server(
clients,
tenant_network=tenant_network,
wait_until=wait_until,
name=name, flavor=flavor,
image_id=image_id, **kwargs)
self.addCleanup(waiters.wait_for_server_termination,
clients.servers_client, body['id'])
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
clients.servers_client.delete_server, body['id'])
server = clients.servers_client.show_server(body['id'])['server']
return server
def get_remote_client(self, ip_address, username=None, private_key=None):
"""Get a SSH client to a remote server
@param ip_address the server floating or fixed IP address to use
for ssh validation
@param username name of the Linux account on the remote server
@param private_key the SSH private key to use
@return a RemoteClient object
"""
if username is None:
username = CONF.validation.image_ssh_user
# Set this with 'keypair' or others to log in with keypair or
# username/password.
if CONF.validation.auth_method == 'keypair':
password = None
if private_key is None:
private_key = self.keypair['private_key']
else:
password = CONF.validation.image_ssh_password
private_key = None
linux_client = remote_client.RemoteClient(ip_address, username,
pkey=private_key,
password=password)
try:
linux_client.validate_authentication()
except Exception as e:
message = ('Initializing SSH connection to %(ip)s failed. '
'Error: %(error)s' % {'ip': ip_address,
'error': e})
caller = test_utils.find_test_caller()
if caller:
message = '(%s) %s' % (caller, message)
LOG.exception(message)
self._log_console_output()
raise
return linux_client
def _log_console_output(self, servers=None):
if not CONF.compute_feature_enabled.console_output:
LOG.debug('Console output not supported, cannot log')
return
if not servers:
servers = self.servers_client.list_servers()
servers = servers['servers']
for server in servers:
try:
console_output = self.servers_client.get_console_output(
server['id'])['output']
LOG.debug('Console output for %s\nbody=\n%s',
server['id'], console_output)
except lib_exc.NotFound:
LOG.debug("Server %s disappeared(deleted) while looking "
"for the console log", server['id'])
def _log_net_info(self, exc):
# network debug is called as part of ssh init
if not isinstance(exc, lib_exc.SSHTimeout):
LOG.debug('Network information on a devstack host')
def ping_ip_address(self, ip_address, should_succeed=True,
ping_timeout=None, mtu=None):
timeout = ping_timeout or CONF.validation.ping_timeout
cmd = ['ping', '-c1', '-w1']
if mtu:
cmd += [
# don't fragment
'-M', 'do',
# ping receives just the size of ICMP payload
'-s', str(net_utils.get_ping_payload_size(mtu, 4))
]
cmd.append(ip_address)
def ping():
proc = subprocess.Popen(cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
proc.communicate()
return (proc.returncode == 0) == should_succeed
caller = test_utils.find_test_caller()
LOG.debug('%(caller)s begins to ping %(ip)s in %(timeout)s sec and the'
' expected result is %(should_succeed)s', {
'caller': caller, 'ip': ip_address, 'timeout': timeout,
'should_succeed':
'reachable' if should_succeed else 'unreachable'
})
result = test_utils.call_until_true(ping, timeout, 1)
LOG.debug('%(caller)s finishes ping %(ip)s in %(timeout)s sec and the '
'ping result is %(result)s', {
'caller': caller, 'ip': ip_address, 'timeout': timeout,
'result': 'expected' if result else 'unexpected'
})
return result
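Both `ping_ip_address` above and the connectivity checks below lean on tempest's `test_utils.call_until_true` polling helper. Its contract — call the predicate until it returns True or the timeout elapses — can be sketched as a minimal re-implementation (not tempest's exact source):

```python
import time

def call_until_true(func, duration, sleep_for):
    """Poll func until it returns True or duration seconds elapse.

    Returns True as soon as func does; returns False if the deadline
    passes first. A minimal sketch of tempest's
    test_utils.call_until_true contract.
    """
    deadline = time.time() + duration
    while time.time() < deadline:
        if func():
            return True
        time.sleep(sleep_for)
    return False
```

In `ping_ip_address` the predicate is the `ping()` closure, the duration is `CONF.validation.ping_timeout`, and the sleep interval is one second.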
def check_vm_connectivity(self, ip_address,
username=None,
private_key=None,
should_connect=True,
mtu=None):
"""Check server connectivity
:param ip_address: server to test against
:param username: server's ssh username
:param private_key: server's ssh private key to be used
:param should_connect: True/False indicates positive/negative test
positive - attempt ping and ssh
negative - attempt ping and fail if it succeeds
:param mtu: network MTU to use for connectivity validation
:raises AssertionError: if the result of the connectivity check does
not match the value of the should_connect param
"""
if should_connect:
msg = "Timed out waiting for %s to become reachable" % ip_address
else:
msg = "ip address %s is reachable" % ip_address
self.assertTrue(self.ping_ip_address(ip_address,
should_succeed=should_connect,
mtu=mtu),
msg=msg)
if should_connect:
# no need to check ssh for negative connectivity
self.get_remote_client(ip_address, username, private_key)
def check_public_network_connectivity(self, ip_address, username,
private_key, should_connect=True,
msg=None, servers=None, mtu=None):
# The target login is assumed to have been configured for
# key-based authentication by cloud-init.
LOG.debug('checking network connections to IP %s with user: %s',
ip_address, username)
try:
self.check_vm_connectivity(ip_address,
username,
private_key,
should_connect=should_connect,
mtu=mtu)
except Exception:
ex_msg = 'Public network connectivity check failed'
if msg:
ex_msg += ": " + msg
LOG.exception(ex_msg)
self._log_console_output(servers)
raise
class NetworkScenarioTest(ScenarioTest):
"""Base class for network scenario tests.
This class provides helpers for network scenario tests, using the neutron
API. Helpers from the ancestor which use the nova network API are overridden
with the neutron API.
This class also enforces using Neutron instead of nova-network.
Subclassed tests will be skipped if Neutron is not enabled.
"""
credentials = ['primary', 'admin']
@classmethod
def skip_checks(cls):
super(NetworkScenarioTest, cls).skip_checks()
if not CONF.service_available.neutron:
raise cls.skipException('Neutron not available')
def _create_network(self, networks_client=None,
tenant_id=None,
namestart='network-smoke-',
port_security_enabled=True):
if not networks_client:
networks_client = self.networks_client
if not tenant_id:
tenant_id = networks_client.tenant_id
name = data_utils.rand_name(namestart)
network_kwargs = dict(name=name, tenant_id=tenant_id)
# Neutron disables port security by default so we have to check the
# config before trying to create the network with port_security_enabled
if CONF.network_feature_enabled.port_security:
network_kwargs['port_security_enabled'] = port_security_enabled
result = networks_client.create_network(**network_kwargs)
network = result['network']
self.assertEqual(network['name'], name)
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
networks_client.delete_network,
network['id'])
return network
def _create_subnet(self, network, subnets_client=None,
routers_client=None, namestart='subnet-smoke',
**kwargs):
"""Create a subnet for the given network
within the cidr block configured for tenant networks.
"""
if not subnets_client:
subnets_client = self.subnets_client
if not routers_client:
routers_client = self.routers_client
def cidr_in_use(cidr, tenant_id):
"""Check cidr existence
:returns: True if a subnet with the cidr already exists in the tenant,
False otherwise
"""
cidr_in_use = self.os_admin.subnets_client.list_subnets(
tenant_id=tenant_id, cidr=cidr)['subnets']
return len(cidr_in_use) != 0
ip_version = kwargs.pop('ip_version', 4)
if ip_version == 6:
tenant_cidr = netaddr.IPNetwork(
CONF.network.project_network_v6_cidr)
num_bits = CONF.network.project_network_v6_mask_bits
else:
tenant_cidr = netaddr.IPNetwork(CONF.network.project_network_cidr)
num_bits = CONF.network.project_network_mask_bits
result = None
str_cidr = None
# Repeatedly attempt subnet creation with sequential cidr
# blocks until an unallocated block is found.
for subnet_cidr in tenant_cidr.subnet(num_bits):
str_cidr = str(subnet_cidr)
if cidr_in_use(str_cidr, tenant_id=network['tenant_id']):
continue
subnet = dict(
name=data_utils.rand_name(namestart),
network_id=network['id'],
tenant_id=network['tenant_id'],
cidr=str_cidr,
ip_version=ip_version,
**kwargs
)
try:
result = subnets_client.create_subnet(**subnet)
break
except lib_exc.Conflict as e:
is_overlapping_cidr = 'overlaps with another subnet' in str(e)
if not is_overlapping_cidr:
raise
self.assertIsNotNone(result, 'Unable to allocate tenant network')
subnet = result['subnet']
self.assertEqual(subnet['cidr'], str_cidr)
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
subnets_client.delete_subnet, subnet['id'])
return subnet
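The sequential-CIDR search `_create_subnet` performs — walk candidate subnets of the tenant range in order and take the first one not already allocated — can be sketched without the Neutron client, using the stdlib `ipaddress` module in place of `netaddr` (a simplified sketch; the real method detects collisions via the create-subnet Conflict error as well):

```python
import ipaddress

def next_free_cidr(tenant_cidr, prefixlen, in_use):
    """Return the first subnet of tenant_cidr with the given prefix
    length whose CIDR string is not already in the in_use set.

    Mirrors the sequential search in _create_subnet, with stdlib
    ipaddress standing in for netaddr's IPNetwork.subnet().
    """
    for candidate in ipaddress.ip_network(tenant_cidr).subnets(
            new_prefix=prefixlen):
        if str(candidate) not in in_use:
            return str(candidate)
    raise RuntimeError('tenant CIDR range exhausted')
```

With `tenant_cidr='10.0.0.0/16'` and `prefixlen=24`, allocating `10.0.0.0/24` first means the next caller receives `10.0.1.0/24`, matching the "repeatedly attempt with sequential cidr blocks" comment above.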
def _get_server_port_id_and_ip4(self, server, ip_addr=None):
if ip_addr:
ports = self.os_admin.ports_client.list_ports(
device_id=server['id'],
fixed_ips='ip_address=%s' % ip_addr)['ports']
else:
ports = self.os_admin.ports_client.list_ports(
device_id=server['id'])['ports']
# A port can have more than one IP address in some cases.
# If the network is dual-stack (IPv4 + IPv6), this port is associated
# with 2 subnets
p_status = ['ACTIVE']
# NOTE(vsaienko) With Ironic, instances live on separate hardware
# servers. Neutron does not bind ports for Ironic instances, as a
# result the port remains in the DOWN state.
# TODO(vsaienko) remove once bug: #1599836 is resolved.
if getattr(CONF.service_available, 'ironic', False):
p_status.append('DOWN')
port_map = [(p["id"], fxip["ip_address"])
for p in ports
for fxip in p["fixed_ips"]
if netutils.is_valid_ipv4(fxip["ip_address"]) and
p['status'] in p_status]
inactive = [p for p in ports if p['status'] != 'ACTIVE']
if inactive:
LOG.warning("Instance has ports that are not ACTIVE: %s", inactive)
self.assertNotEqual(0, len(port_map),
"No IPv4 addresses found in: %s" % ports)
self.assertEqual(len(port_map), 1,
"Found multiple IPv4 addresses: %s. "
"Unable to determine which port to target."
% port_map)
return port_map[0]
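The filtering in `_get_server_port_id_and_ip4` — keep only fixed IPs that are valid IPv4 addresses and whose port is in an accepted state — can be sketched standalone, with the stdlib `ipaddress` module standing in for `netutils.is_valid_ipv4` (a sketch over plain dicts, not the Neutron client response objects):

```python
import ipaddress

def ipv4_port_map(ports, allowed_status=('ACTIVE',)):
    """Collect (port_id, ipv4_address) pairs from a Neutron-style
    port list, keeping only ports whose status is allowed and only
    IPv4 fixed IPs (a dual-stack port also carries an IPv6 address)."""
    pairs = []
    for port in ports:
        if port['status'] not in allowed_status:
            continue
        for fixed_ip in port['fixed_ips']:
            try:
                addr = ipaddress.ip_address(fixed_ip['ip_address'])
            except ValueError:
                continue  # skip malformed addresses
            if addr.version == 4:
                pairs.append((port['id'], fixed_ip['ip_address']))
    return pairs
```

Passing `allowed_status=('ACTIVE', 'DOWN')` reproduces the Ironic special case above, where unbound ports legitimately stay DOWN.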
def _get_network_by_name(self, network_name):
net = self.os_admin.networks_client.list_networks(
name=network_name)['networks']
self.assertNotEqual(len(net), 0,
"Unable to get network by name: %s" % network_name)
return net[0]
def create_floating_ip(self, thing, external_network_id=None,
port_id=None, client=None):
"""Create a floating IP and associates to a resource/port on Neutron"""
if not external_network_id:
external_network_id = CONF.network.public_network_id
if not client:
client = self.floating_ips_client
if not port_id:
port_id, ip4 = self._get_server_port_id_and_ip4(thing)
else:
ip4 = None
result = client.create_floatingip(
floating_network_id=external_network_id,
port_id=port_id,
tenant_id=thing['tenant_id'],
fixed_ip_address=ip4
)
floating_ip = result['floatingip']
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
client.delete_floatingip,
floating_ip['id'])
return floating_ip
def _associate_floating_ip(self, floating_ip, server):
port_id, _ = self._get_server_port_id_and_ip4(server)
kwargs = dict(port_id=port_id)
floating_ip = self.floating_ips_client.update_floatingip(
floating_ip['id'], **kwargs)['floatingip']
self.assertEqual(port_id, floating_ip['port_id'])
return floating_ip
def _disassociate_floating_ip(self, floating_ip):
""":param floating_ip: floating_ips_client.create_floatingip"""
kwargs = dict(port_id=None)
floating_ip = self.floating_ips_client.update_floatingip(
floating_ip['id'], **kwargs)['floatingip']
self.assertIsNone(floating_ip['port_id'])
return floating_ip
def check_floating_ip_status(self, floating_ip, status):
"""Verifies floatingip reaches the given status
:param dict floating_ip: floating IP dict to check status
:param status: target status
:raises AssertionError: if status doesn't match
"""
floatingip_id = floating_ip['id']
def refresh():
result = (self.floating_ips_client.
show_floatingip(floatingip_id)['floatingip'])
return status == result['status']
test_utils.call_until_true(refresh,
CONF.network.build_timeout,
CONF.network.build_interval)
floating_ip = self.floating_ips_client.show_floatingip(
floatingip_id)['floatingip']
self.assertEqual(status, floating_ip['status'],
message="FloatingIP: {fp} is at status: {cst}. "
"failed to reach status: {st}"
.format(fp=floating_ip, cst=floating_ip['status'],
st=status))
LOG.info('FloatingIP: %(fp)s is at status: %(st)s',
{'fp': floating_ip, 'st': status})
def _check_tenant_network_connectivity(self, server,
username,
private_key,
should_connect=True,
servers_for_debug=None):
if not CONF.network.project_networks_reachable:
msg = 'Tenant networks not configured to be reachable.'
LOG.info(msg)
return
# The target login is assumed to have been configured for
# key-based authentication by cloud-init.
try:
for net_name, ip_addresses in server['addresses'].items():
for ip_address in ip_addresses:
self.check_vm_connectivity(ip_address['addr'],
username,
private_key,
should_connect=should_connect)
except Exception as e:
LOG.exception('Tenant network connectivity check failed')
self._log_console_output(servers_for_debug)
self._log_net_info(e)
raise
def _check_remote_connectivity(self, source, dest, should_succeed=True,
nic=None):
"""check ping server via source ssh connection
:param source: RemoteClient: an ssh connection from which to ping
:param dest: and IP to ping against
:param should_succeed: boolean should ping succeed or not
:param nic: specific network interface to ping from
:returns: boolean -- should_succeed == ping
:returns: ping is false if ping failed
"""
def ping_remote():
try:
source.ping_host(dest, nic=nic)
except lib_exc.SSHExecCommandFailed:
LOG.warning('Failed to ping IP: %s via an ssh connection '
'from: %s.', dest, source.ssh_client.host)
return not should_succeed
return should_succeed
return test_utils.call_until_true(ping_remote,
CONF.validation.ping_timeout,
1)
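The connectivity helpers above all lean on tempest's `test_utils.call_until_true` polling loop. A minimal standalone sketch of that retry pattern (the function and the `flaky_ping` probe here are illustrative stand-ins, not tempest's code):

```python
import time


def call_until_true(func, duration, sleep_for):
    """Poll func until it returns True or duration elapses.
    Returns True on success, False on timeout, which is how
    _check_remote_connectivity interprets the result.
    """
    deadline = time.time() + duration
    while time.time() < deadline:
        if func():
            return True
        time.sleep(sleep_for)
    return False


attempts = {'n': 0}


def flaky_ping():
    # Succeeds on the third attempt, like a VM that needs a moment
    # before it starts answering pings.
    attempts['n'] += 1
    return attempts['n'] >= 3


print(call_until_true(flaky_ping, duration=5, sleep_for=0.01))  # True
```

With `should_succeed=False` the caller simply inverts the probe, which is why `ping_remote` in the method above returns `not should_succeed` on an SSH failure.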
def _create_security_group(self, security_group_rules_client=None,
tenant_id=None,
namestart='secgroup-smoke',
security_groups_client=None):
if security_group_rules_client is None:
security_group_rules_client = self.security_group_rules_client
if security_groups_client is None:
security_groups_client = self.security_groups_client
if tenant_id is None:
tenant_id = security_groups_client.tenant_id
secgroup = self._create_empty_security_group(
namestart=namestart, client=security_groups_client,
tenant_id=tenant_id)
# Add rules to the security group
rules = self._create_loginable_secgroup_rule(
security_group_rules_client=security_group_rules_client,
secgroup=secgroup,
security_groups_client=security_groups_client)
for rule in rules:
self.assertEqual(tenant_id, rule['tenant_id'])
self.assertEqual(secgroup['id'], rule['security_group_id'])
return secgroup
def _create_empty_security_group(self, client=None, tenant_id=None,
namestart='secgroup-smoke'):
"""Create a security group without rules.
Default rules will be created:
- IPv4 egress to any
- IPv6 egress to any
:param tenant_id: secgroup will be created in this tenant
:returns: the created security group
"""
if client is None:
client = self.security_groups_client
if not tenant_id:
tenant_id = client.tenant_id
sg_name = data_utils.rand_name(namestart)
sg_desc = sg_name + " description"
sg_dict = dict(name=sg_name,
description=sg_desc)
sg_dict['tenant_id'] = tenant_id
result = client.create_security_group(**sg_dict)
secgroup = result['security_group']
self.assertEqual(secgroup['name'], sg_name)
self.assertEqual(tenant_id, secgroup['tenant_id'])
self.assertEqual(secgroup['description'], sg_desc)
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
client.delete_security_group, secgroup['id'])
return secgroup
def _default_security_group(self, client=None, tenant_id=None):
"""Get default secgroup for given tenant_id.
:returns: default secgroup for given tenant
"""
if client is None:
client = self.security_groups_client
if not tenant_id:
tenant_id = client.tenant_id
sgs = [
sg for sg in list(client.list_security_groups().values())[0]
if sg['tenant_id'] == tenant_id and sg['name'] == 'default'
]
msg = "No default security group for tenant %s." % (tenant_id)
self.assertGreater(len(sgs), 0, msg)
return sgs[0]
def _create_security_group_rule(self, secgroup=None,
sec_group_rules_client=None,
tenant_id=None,
security_groups_client=None, **kwargs):
"""Create a rule from a dictionary of rule parameters.
Create a rule in a secgroup. if secgroup not defined will search for
default secgroup in tenant_id.
:param secgroup: the security group.
:param tenant_id: if secgroup not passed -- the tenant in which to
search for default secgroup
:param kwargs: a dictionary containing rule parameters:
for example, to allow incoming ssh:
rule = {
direction: 'ingress'
protocol:'tcp',
port_range_min: 22,
port_range_max: 22
}
"""
if sec_group_rules_client is None:
sec_group_rules_client = self.security_group_rules_client
if security_groups_client is None:
security_groups_client = self.security_groups_client
if not tenant_id:
tenant_id = security_groups_client.tenant_id
if secgroup is None:
secgroup = self._default_security_group(
client=security_groups_client, tenant_id=tenant_id)
ruleset = dict(security_group_id=secgroup['id'],
tenant_id=secgroup['tenant_id'])
ruleset.update(kwargs)
sg_rule = sec_group_rules_client.create_security_group_rule(**ruleset)
sg_rule = sg_rule['security_group_rule']
self.assertEqual(secgroup['tenant_id'], sg_rule['tenant_id'])
self.assertEqual(secgroup['id'], sg_rule['security_group_id'])
return sg_rule
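The ruleset assembly above is plain dict merging; for the docstring's ssh example it produces the following (the secgroup ids here are invented for illustration):

```python
# Illustrative stand-ins for a secgroup dict returned by Neutron.
secgroup = {'id': 'sg-0001', 'tenant_id': 'tenant-a'}
kwargs = dict(direction='ingress', protocol='tcp',
              port_range_min=22, port_range_max=22)

# Same merge order as _create_security_group_rule: base identifiers
# first, then the caller-supplied rule parameters on top.
ruleset = dict(security_group_id=secgroup['id'],
               tenant_id=secgroup['tenant_id'])
ruleset.update(kwargs)

print(sorted(ruleset))
# ['direction', 'port_range_max', 'port_range_min', 'protocol',
#  'security_group_id', 'tenant_id']
```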
def _create_loginable_secgroup_rule(self, security_group_rules_client=None,
secgroup=None,
security_groups_client=None):
"""Create loginable security group rule
This function will create:
1. egress and ingress tcp port 22 allow rule in order to allow ssh
access for ipv4.
2. egress and ingress ipv6 icmp allow rule, in order to allow icmpv6.
3. egress and ingress ipv4 icmp allow rule, in order to allow icmpv4.
"""
if security_group_rules_client is None:
security_group_rules_client = self.security_group_rules_client
if security_groups_client is None:
security_groups_client = self.security_groups_client
rules = []
rulesets = [
dict(
# ssh
protocol='tcp',
port_range_min=22,
port_range_max=22,
),
dict(
# ping
protocol='icmp',
),
dict(
# ipv6-icmp for ping6
protocol='icmp',
ethertype='IPv6',
)
]
sec_group_rules_client = security_group_rules_client
for ruleset in rulesets:
for r_direction in ['ingress', 'egress']:
ruleset['direction'] = r_direction
try:
sg_rule = self._create_security_group_rule(
sec_group_rules_client=sec_group_rules_client,
secgroup=secgroup,
security_groups_client=security_groups_client,
**ruleset)
except lib_exc.Conflict as ex:
# if the rule already exists, skip it and continue
msg = 'Security group rule already exists'
if msg not in ex._error_string:
raise ex
else:
self.assertEqual(r_direction, sg_rule['direction'])
rules.append(sg_rule)
return rules
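The nested loop above issues one create call per (ruleset, direction) pair. The expansion it walks can be sketched as a comprehension (note the original mutates each ruleset in place; this sketch copies instead):

```python
rulesets = [
    dict(protocol='tcp', port_range_min=22, port_range_max=22),  # ssh
    dict(protocol='icmp'),                                       # ping
    dict(protocol='icmp', ethertype='IPv6'),                     # ping6
]

# One rule per ruleset/direction combination, six in total.
expanded = [dict(rs, direction=direction)
            for rs in rulesets
            for direction in ('ingress', 'egress')]

print(len(expanded))  # 6
```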
def _get_router(self, client=None, tenant_id=None):
"""Retrieve a router for the given tenant id.
If a public router has been configured, it will be returned.
If a public router has not been configured, but a public
network has, a tenant router will be created and returned that
routes traffic to the public network.
"""
if not client:
client = self.routers_client
if not tenant_id:
tenant_id = client.tenant_id
router_id = CONF.network.public_router_id
network_id = CONF.network.public_network_id
if router_id:
body = client.show_router(router_id)
return body['router']
elif network_id:
router = self._create_router(client, tenant_id)
kwargs = {'external_gateway_info': dict(network_id=network_id)}
router = client.update_router(router['id'], **kwargs)['router']
return router
else:
raise Exception("Neither of 'public_router_id' or "
"'public_network_id' has been defined.")
def _create_router(self, client=None, tenant_id=None,
namestart='router-smoke'):
if not client:
client = self.routers_client
if not tenant_id:
tenant_id = client.tenant_id
name = data_utils.rand_name(namestart)
result = client.create_router(name=name,
admin_state_up=True,
tenant_id=tenant_id)
router = result['router']
self.assertEqual(router['name'], name)
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
client.delete_router,
router['id'])
return router
def _update_router_admin_state(self, router, admin_state_up):
kwargs = dict(admin_state_up=admin_state_up)
router = self.routers_client.update_router(
router['id'], **kwargs)['router']
self.assertEqual(admin_state_up, router['admin_state_up'])
def create_networks(self, networks_client=None,
routers_client=None, subnets_client=None,
tenant_id=None, dns_nameservers=None,
port_security_enabled=True):
"""Create a network with a subnet connected to a router.
The baremetal driver is a special case since all nodes are
on the same shared network.
:param tenant_id: id of tenant to create resources in.
:param dns_nameservers: list of dns servers to send to subnet.
:returns: network, subnet, router
"""
if CONF.network.shared_physical_network:
# NOTE(Shrews): This exception is for environments where tenant
# credential isolation is available, but network separation is
# not (the current baremetal case). Likely can be removed when
# test account mgmt is reworked:
# https://blueprints.launchpad.net/tempest/+spec/test-accounts
if not CONF.compute.fixed_network_name:
m = 'fixed_network_name must be specified in config'
raise lib_exc.InvalidConfiguration(m)
network = self._get_network_by_name(
CONF.compute.fixed_network_name)
router = None
subnet = None
else:
network = self._create_network(
networks_client=networks_client,
tenant_id=tenant_id,
port_security_enabled=port_security_enabled)
router = self._get_router(client=routers_client,
tenant_id=tenant_id)
subnet_kwargs = dict(network=network,
subnets_client=subnets_client,
routers_client=routers_client)
# use explicit check because empty list is a valid option
if dns_nameservers is not None:
subnet_kwargs['dns_nameservers'] = dns_nameservers
subnet = self._create_subnet(**subnet_kwargs)
if not routers_client:
routers_client = self.routers_client
router_id = router['id']
routers_client.add_router_interface(router_id,
subnet_id=subnet['id'])
# save a cleanup job to remove this association between
# router and subnet
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
routers_client.remove_router_interface, router_id,
subnet_id=subnet['id'])
return network, subnet, router


@ -1,50 +0,0 @@
# Copyright 2016 Rackspace Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
"""
For running Octavia tests, it is assumed that the following option is
defined in the [service_available] section of etc/tempest.conf
octavia = True
"""
service_option = cfg.BoolOpt('octavia',
default=False,
help="Whether or not Octavia is expected to be "
"available")
octavia_group = cfg.OptGroup(name='octavia', title='Octavia Service')
OctaviaGroup = [
cfg.StrOpt('catalog_type',
default='network',
help='Catalog type of the Octavia service.'),
cfg.IntOpt('build_interval',
default=5,
help='Time in seconds between build status checks for '
'non-load-balancer resources to build'),
cfg.IntOpt('build_timeout',
default=30,
help='Timeout in seconds to wait for non-load-balancer '
'resources to build'),
cfg.IntOpt('lb_build_interval',
default=15,
help='Time in seconds between build status checks for a '
'load balancer.'),
cfg.IntOpt('lb_build_timeout',
default=900,
help='Timeout in seconds to wait for a '
'load balancer to build.'),
]
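Taken together, the `service_available` flag and these option groups correspond to a tempest.conf fragment like the one below (values shown are simply the defaults registered above):

```ini
[service_available]
octavia = True

[octavia]
catalog_type = network
build_interval = 5
build_timeout = 30
lb_build_interval = 15
lb_build_timeout = 900
```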


@ -1,42 +0,0 @@
# Copyright 2016 Hewlett Packard Enterprise Development Company
# Copyright 2016 Rackspace Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from tempest.test_discover import plugins
import octavia
from octavia.tests.tempest import config as octavia_config
class OctaviaTempestPlugin(plugins.TempestPlugin):
def load_tests(self):
base_path = os.path.split(os.path.dirname(
os.path.abspath(octavia.__file__)))[0]
test_dir = "octavia/tests/tempest"
full_test_dir = os.path.join(base_path, test_dir)
return full_test_dir, base_path
def register_opts(self, conf):
conf.register_group(octavia_config.octavia_group)
conf.register_opts(octavia_config.OctaviaGroup, group='octavia')
conf.register_opt(octavia_config.service_option,
group='service_available')
def get_opt_lists(self):
return [('octavia', octavia_config.OctaviaGroup),
('service_available', [octavia_config.service_option])]
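Tempest discovers a plugin class like this through a `tempest.test_plugins` entry point in the package's setup.cfg. The entry would have looked roughly like the following (the exact module path is reconstructed for illustration, not copied from the tree):

```ini
[entry_points]
tempest.test_plugins =
    octavia = octavia.tests.tempest.plugin:OctaviaTempestPlugin
```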


@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@ -1,60 +0,0 @@
# Copyright 2016 Rackspace US Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_serialization import jsonutils
from six.moves.urllib import parse
from tempest.lib.common import rest_client
class HealthMonitorsClient(rest_client.RestClient):
"""Tests Health Monitors API."""
_HEALTH_MONITORS_URL = ("v1/loadbalancers/{lb_id}/"
"pools/{pool_id}/healthmonitor")
def get_health_monitor(self, lb_id, pool_id, params=None):
"""Get health monitor details."""
url = self._HEALTH_MONITORS_URL.format(lb_id=lb_id, pool_id=pool_id)
if params:
url = "{0}?{1}".format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBodyList(resp, body)
def create_health_monitor(self, lb_id, pool_id, **kwargs):
"""Create a health monitor."""
url = self._HEALTH_MONITORS_URL.format(lb_id=lb_id, pool_id=pool_id)
post_body = jsonutils.dumps(kwargs)
resp, body = self.post(url, post_body)
body = jsonutils.loads(body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def update_health_monitor(self, lb_id, pool_id, **kwargs):
"""Update a health monitor."""
url = self._HEALTH_MONITORS_URL.format(lb_id=lb_id, pool_id=pool_id)
put_body = jsonutils.dumps(kwargs)
resp, body = self.put(url, put_body)
body = jsonutils.loads(body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def delete_health_monitor(self, lb_id, pool_id):
"""Delete an existing health monitor."""
url = self._HEALTH_MONITORS_URL.format(lb_id=lb_id, pool_id=pool_id)
resp, body = self.delete(url)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)


@ -1,82 +0,0 @@
# Copyright 2016 Rackspace US Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_serialization import jsonutils
from six.moves.urllib import parse
from tempest.lib.common import rest_client
class ListenersClient(rest_client.RestClient):
"""Tests Listeners API."""
_LISTENERS_URL = "v1/loadbalancers/{lb_id}/listeners"
_LISTENER_URL = "{base_url}/{{listener_id}}".format(
base_url=_LISTENERS_URL)
_LISTENER_STATS_URL = "{base_url}/stats".format(base_url=_LISTENER_URL)
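The doubled braces in `_LISTENER_URL` are escapes: the first `.format()` fills in `base_url` and collapses `{{listener_id}}` to a plain `{listener_id}` placeholder, which each request method fills in later. A standalone sketch of the same two-stage templating:

```python
LISTENERS_URL = "v1/loadbalancers/{lb_id}/listeners"
# {{listener_id}} is escaped, so it survives the first format pass
# as a plain {listener_id} placeholder.
LISTENER_URL = "{base_url}/{{listener_id}}".format(base_url=LISTENERS_URL)
print(LISTENER_URL)
# v1/loadbalancers/{lb_id}/listeners/{listener_id}

# The per-request call fills in the remaining placeholders.
url = LISTENER_URL.format(lb_id='lb-1', listener_id='li-2')
print(url)  # v1/loadbalancers/lb-1/listeners/li-2
```

Note that substituted values are not re-scanned by `str.format`, so the `{lb_id}` inside `base_url` passes through the first call untouched.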
def list_listeners(self, lb_id, params=None):
"""List all listeners."""
url = self._LISTENERS_URL.format(lb_id=lb_id)
if params:
url = '{0}?{1}'.format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBodyList(resp, body)
def get_listener(self, lb_id, listener_id, params=None):
"""Get listener details."""
url = self._LISTENER_URL.format(lb_id=lb_id, listener_id=listener_id)
if params:
url = '{0}?{1}'.format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
def create_listener(self, lb_id, **kwargs):
"""Create a listener build."""
url = self._LISTENERS_URL.format(lb_id=lb_id)
post_body = jsonutils.dumps(kwargs)
resp, body = self.post(url, post_body)
body = jsonutils.loads(body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def update_listener(self, lb_id, listener_id, **kwargs):
"""Update an listener build."""
url = self._LISTENER_URL.format(lb_id=lb_id, listener_id=listener_id)
put_body = jsonutils.dumps(kwargs)
resp, body = self.put(url, put_body)
body = jsonutils.loads(body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def delete_listener(self, lb_id, listener_id):
"""Delete an existing listener build."""
url = self._LISTENER_URL.format(lb_id=lb_id, listener_id=listener_id)
resp, body = self.delete(url)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def get_listener_stats(self, lb_id, listener_id, params=None):
"""Get listener statistics."""
url = self._LISTENER_STATS_URL.format(lb_id=lb_id,
listener_id=listener_id)
if params:
url = '{0}?{1}'.format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)


@ -1,101 +0,0 @@
# Copyright 2016 Rackspace US Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_serialization import jsonutils
from six.moves.urllib import parse
from tempest.lib.common import rest_client
from tempest.lib import exceptions as tempest_exceptions
class LoadBalancersClient(rest_client.RestClient):
"""Tests Load Balancers API."""
_LOAD_BALANCERS_URL = "v1/loadbalancers"
_LOAD_BALANCER_URL = "{base_url}/{{lb_id}}".format(
base_url=_LOAD_BALANCERS_URL)
_LOAD_BALANCER_CASCADE_DELETE_URL = "{lb_url}/delete_cascade".format(
lb_url=_LOAD_BALANCER_URL)
def list_load_balancers(self, params=None):
"""List all load balancers."""
url = self._LOAD_BALANCERS_URL
if params:
url = '{0}?{1}'.format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBodyList(resp, body)
def get_load_balancer(self, lb_id, params=None):
"""Get load balancer details."""
url = self._LOAD_BALANCER_URL.format(lb_id=lb_id)
if params:
url = '{0}?{1}'.format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
def create_load_balancer(self, **kwargs):
"""Create a load balancer build."""
url = self._LOAD_BALANCERS_URL
post_body = jsonutils.dumps(kwargs)
resp, body = self.post(url, post_body)
body = jsonutils.loads(body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def create_load_balancer_graph(self, lbgraph):
"""Create a load balancer graph build."""
url = self._LOAD_BALANCERS_URL
post_body = jsonutils.dumps(lbgraph)
resp, body = self.post(url, post_body)
body = jsonutils.loads(body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def update_load_balancer(self, lb_id, **kwargs):
"""Update a load balancer build."""
url = self._LOAD_BALANCER_URL.format(lb_id=lb_id)
put_body = jsonutils.dumps(kwargs)
resp, body = self.put(url, put_body)
body = jsonutils.loads(body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def delete_load_balancer(self, lb_id):
"""Delete an existing load balancer build."""
url = self._LOAD_BALANCER_URL.format(lb_id=lb_id)
resp, body = self.delete(url)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def delete_load_balancer_cascade(self, lb_id):
"""Delete an existing load balancer (cascading)."""
url = self._LOAD_BALANCER_CASCADE_DELETE_URL.format(lb_id=lb_id)
resp, body = self.delete(url)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def create_load_balancer_over_quota(self, **kwargs):
"""Attempt to build a load balancer over quota."""
url = self._LOAD_BALANCERS_URL
post_body = jsonutils.dumps(kwargs)
try:
resp, body = self.post(url, post_body)
except tempest_exceptions.Forbidden:
# This is what we expect to happen
return
assert resp.status == 403, "Expected over quota 403 response"
return rest_client.ResponseBody(resp, body)


@ -1,75 +0,0 @@
# Copyright 2016 Rackspace US Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_serialization import jsonutils
from six.moves.urllib import parse
from tempest.lib.common import rest_client
class MembersClient(rest_client.RestClient):
"""Tests Members API."""
_MEMBERS_URL = ("v1/loadbalancers/{lb_id}/pools/{pool_id}/members")
_MEMBER_URL = "{base_url}/{{member_id}}".format(base_url=_MEMBERS_URL)
def list_members(self, lb_id, pool_id, params=None):
"""List all Members."""
url = self._MEMBERS_URL.format(lb_id=lb_id, pool_id=pool_id)
if params:
url = "{0}?{1}".format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBodyList(resp, body)
def get_member(self, lb_id, pool_id, member_id, params=None):
"""Get member details."""
url = self._MEMBER_URL.format(lb_id=lb_id,
pool_id=pool_id,
member_id=member_id)
if params:
url = '{0}?{1}'.format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
def create_member(self, lb_id, pool_id, **kwargs):
"""Create member."""
url = self._MEMBERS_URL.format(lb_id=lb_id,
pool_id=pool_id)
post_body = jsonutils.dumps(kwargs)
resp, body = self.post(url, post_body)
body = jsonutils.loads(body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def update_member(self, lb_id, pool_id, member_id, **kwargs):
"""Update member."""
url = self._MEMBER_URL.format(lb_id=lb_id,
pool_id=pool_id,
member_id=member_id)
put_body = jsonutils.dumps(kwargs)
resp, body = self.put(url, put_body)
body = jsonutils.loads(body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def delete_member(self, lb_id, pool_id, member_id):
"""Delete member."""
url = self._MEMBER_URL.format(lb_id=lb_id,
pool_id=pool_id,
member_id=member_id)
resp, body = self.delete(url)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)


@ -1,70 +0,0 @@
# Copyright 2016 Rackspace US Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_serialization import jsonutils
from six.moves.urllib import parse
from tempest.lib.common import rest_client
class PoolsClient(rest_client.RestClient):
"""Test Pools API."""
_POOLS_URL = "v1/loadbalancers/{lb_id}/pools"
_POOL_URL = "{base_url}/{{pool_id}}".format(base_url=_POOLS_URL)
def list_pools(self, lb_id, listener_id=None, params=None):
    """List all pools. Filter by listener id if provided."""
    url = self._POOLS_URL.format(lb_id=lb_id)
    # Guard against params=None and only apply the filter when a
    # listener id was actually supplied.
    if listener_id is not None:
        params = params or {}
        params['listener_id'] = str(listener_id)
    if params:
        url = '{0}?{1}'.format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBodyList(resp, body)
def get_pool(self, lb_id, pool_id, params=None):
"""Get pool details."""
url = self._POOL_URL.format(lb_id=lb_id, pool_id=pool_id)
if params:
url = '{0}?{1}'.format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
def create_pool(self, lb_id, **kwargs):
"""Create a pool."""
url = self._POOLS_URL.format(lb_id=lb_id)
post_body = jsonutils.dumps(kwargs)
resp, body = self.post(url, post_body)
body = jsonutils.loads(body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def update_pool(self, lb_id, pool_id, **kwargs):
"""Update a pool."""
url = self._POOL_URL.format(lb_id=lb_id, pool_id=pool_id)
put_body = jsonutils.dumps(kwargs)
resp, body = self.put(url, put_body)
body = jsonutils.loads(body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def delete_pool(self, lb_id, pool_id):
"""Delete a pool."""
url = self._POOL_URL.format(lb_id=lb_id, pool_id=pool_id)
resp, body = self.delete(url)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)


@ -1,59 +0,0 @@
# Copyright 2016 Rackspace US Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_serialization import jsonutils
from six.moves.urllib import parse
from tempest.lib.common import rest_client
class QuotasClient(rest_client.RestClient):
"""Tests Quotas API."""
_QUOTAS_URL = "v1/{project_id}/quotas"
def list_quotas(self, params=None):
"""List all non-default quotas."""
url = "v1/quotas"
if params:
url = '{0}?{1}'.format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBodyList(resp, body)
def get_quotas(self, project_id, params=None):
"""Get Quotas for a project."""
url = self._QUOTAS_URL.format(project_id=project_id)
if params:
url = '{0}?{1}'.format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
def update_quotas(self, project_id, **kwargs):
"""Update a Quotas for a project."""
url = self._QUOTAS_URL.format(project_id=project_id)
put_body = jsonutils.dumps(kwargs)
resp, body = self.put(url, put_body)
body = jsonutils.loads(body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def delete_quotas(self, project_id):
"""Delete an Quotas for a project."""
url = self._QUOTAS_URL.format(project_id=project_id)
resp, body = self.delete(url)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)


@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@ -1,901 +0,0 @@
# Copyright 2016 Rackspace Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
try:
from http import cookiejar as cookielib
except ImportError:
import cookielib
import os
import shlex
import shutil
import socket
import subprocess
import tempfile
import threading
import time
from oslo_log import log as logging
import six
from six.moves.urllib import error
from six.moves.urllib import request as urllib2
from tempest import clients
from tempest.common import credentials_factory
from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import test_utils
from tempest.lib import exceptions as lib_exc
from octavia.i18n import _
from octavia.tests.tempest.common import manager
from octavia.tests.tempest.v1.clients import health_monitors_client
from octavia.tests.tempest.v1.clients import listeners_client
from octavia.tests.tempest.v1.clients import load_balancers_client
from octavia.tests.tempest.v1.clients import members_client
from octavia.tests.tempest.v1.clients import pools_client
from octavia.tests.tempest.v1.clients import quotas_client
config = config.CONF
LOG = logging.getLogger(__name__)
HTTPD_SRC = os.path.abspath(
os.path.join(os.path.dirname(__file__),
'../../../contrib/httpd.go'))
class BaseTestCase(manager.NetworkScenarioTest):
def setUp(self):
super(BaseTestCase, self).setUp()
self.servers_keypairs = {}
self.servers = {}
self.members = []
self.floating_ips = {}
self.servers_floating_ips = {}
self.server_ips = {}
self.start_port = 80
self.num = 50
self.server_fixed_ips = {}
mgr = self.get_client_manager()
auth_provider = mgr.auth_provider
region = config.network.region or config.identity.region
self.client_args = [auth_provider, 'load-balancer', region]
self.load_balancers_client = (
load_balancers_client.LoadBalancersClient(*self.client_args))
self.listeners_client = (
listeners_client.ListenersClient(*self.client_args))
self.pools_client = pools_client.PoolsClient(*self.client_args)
self.members_client = members_client.MembersClient(
*self.client_args)
self.health_monitors_client = (
health_monitors_client.HealthMonitorsClient(
*self.client_args))
self.quotas_client = quotas_client.QuotasClient(*self.client_args)
self._create_security_group_for_test()
self._set_net_and_subnet()
# admin network client needed for assigning octavia port to flip
os_admin = clients.Manager(
credentials_factory.get_configured_admin_credentials())
os_admin.auth_provider.fill_credentials()
self.floating_ips_client_admin = os_admin.floating_ips_client
self.ports_client_admin = os_admin.ports_client
@classmethod
def skip_checks(cls):
super(BaseTestCase, cls).skip_checks()
cfg = config.network
if not utils.is_extension_enabled('lbaasv2', 'network'):
msg = 'LBaaS Extension is not enabled'
raise cls.skipException(msg)
if not (cfg.project_networks_reachable or cfg.public_network_id):
msg = ('Either project_networks_reachable must be "true", or '
'public_network_id must be defined.')
raise cls.skipException(msg)
def _set_net_and_subnet(self):
"""Set Network and Subnet
Query and set appropriate network and subnet attributes to be used
for the test. Existing tenant networks are used if they are found.
The configured private network and associated subnet is used as a
fallback in absence of tenant networking.
"""
tenant_id = self.load_balancers_client.tenant_id
try:
tenant_net = self.os_admin.networks_client.list_networks(
tenant_id=tenant_id)['networks'][0]
except IndexError:
tenant_net = None
if tenant_net:
tenant_subnet = self.os_admin.subnets_client.list_subnets(
tenant_id=tenant_id)['subnets'][0]
self.subnet = tenant_subnet
self.network = tenant_net
else:
self.network = self._get_network_by_name(
config.compute.fixed_network_name)
# We are assuming that the first subnet associated
# with the fixed network is the one we want. In the future, we
# should instead pull a subnet id from config, which is set by
# devstack/admin/etc.
subnet = self.os_admin.subnets_client.list_subnets(
network_id=self.network['id'])['subnets'][0]
self.subnet = subnet
def _create_security_group_for_test(self):
self.security_group = self._create_security_group(
tenant_id=self.load_balancers_client.tenant_id)
self._create_security_group_rules_for_port(self.start_port)
self._create_security_group_rules_for_port(self.start_port + 1)
def _create_security_group_rules_for_port(self, port):
rule = {
'direction': 'ingress',
'protocol': 'tcp',
'port_range_min': port,
'port_range_max': port,
}
self._create_security_group_rule(
secgroup=self.security_group,
tenant_id=self.load_balancers_client.tenant_id,
**rule)
def _ipv6_subnet(self, address6_mode):
tenant_id = self.load_balancers_client.tenant_id
router = self._get_router(tenant_id=tenant_id)
self.network = self._create_network(tenant_id=tenant_id)
self.subnet = self._create_subnet(network=self.network,
namestart='sub6',
ip_version=6,
ipv6_ra_mode=address6_mode,
ipv6_address_mode=address6_mode)
self.subnet.add_to_router(router_id=router['id'])
self.addCleanup(self.subnet.delete)
def _create_server(self, name):
keypair = self.create_keypair()
security_groups = [{'name': self.security_group['name']}]
create_kwargs = {
'networks': [
{'uuid': self.network['id']},
],
'key_name': keypair['name'],
'security_groups': security_groups,
}
net_name = self.network['name']
server = self.create_server(name=name, **create_kwargs)
waiters.wait_for_server_status(self.servers_client,
server['id'], 'ACTIVE')
server = self.servers_client.show_server(server['id'])
server = server['server']
self.servers_keypairs[server['id']] = keypair
        LOG.info('servers_keypairs: %s', self.servers_keypairs)
if (config.network.public_network_id and not
config.network.project_networks_reachable):
public_network_id = config.network.public_network_id
floating_ip = self._create_floating_ip(
server, public_network_id)
self.floating_ips[floating_ip['id']] = server
self.server_ips[server['id']] = floating_ip['floating_ip_address']
else:
self.server_ips[server['id']] = (
server['addresses'][net_name][0]['addr'])
self.server_fixed_ips[server['id']] = (
server['addresses'][net_name][0]['addr'])
self.assertTrue(self.servers_keypairs)
self.servers[name] = server['id']
return server
def _create_servers(self, num=2):
for count in range(num):
name = "server%s" % (count + 1)
self.server = self._create_server(name=name)
self.assertEqual(len(self.servers_keypairs), num)
def _stop_server(self, name):
for sname, value in six.iteritems(self.servers):
if sname == name:
LOG.info('STOPPING SERVER: %s', sname)
self.servers_client.stop_server(value)
waiters.wait_for_server_status(self.servers_client,
value, 'SHUTOFF')
LOG.info('STOPPING SERVER COMPLETED!')
def _start_server(self, name):
for sname, value in six.iteritems(self.servers):
if sname == name:
                self.servers_client.start_server(value)
waiters.wait_for_server_status(self.servers_client,
value, 'ACTIVE')
def _build_static_httpd(self):
"""Compile test httpd as a static binary
returns file path of resulting binary file
"""
builddir = tempfile.mkdtemp()
shutil.copyfile(HTTPD_SRC, os.path.join(builddir, 'httpd.go'))
self.execute('go build -ldflags '
'"-linkmode external -extldflags -static" '
'httpd.go', cwd=builddir)
return os.path.join(builddir, 'httpd')
def _start_backend_httpd_processes(self, backend, ports=None):
"""Start one or more webservers on a given backend server
1. SSH to the backend
2. Start http backends listening on the given ports
"""
ports = ports or [80, 81]
httpd = self._build_static_httpd()
backend_id = self.servers[backend]
for server_id, ip in six.iteritems(self.server_ips):
if server_id != backend_id:
continue
private_key = self.servers_keypairs[server_id]['private_key']
username = config.validation.image_ssh_user
ssh_client = self.get_remote_client(
ip_address=ip,
private_key=private_key)
with tempfile.NamedTemporaryFile() as key:
key.write(private_key.encode('utf-8'))
key.flush()
self.copy_file_to_host(httpd,
"/dev/shm/httpd",
ip,
username, key.name)
# Start httpd
start_server = ('sudo sh -c "ulimit -n 100000; screen -d -m '
'/dev/shm/httpd -id %(id)s -port %(port)s"')
for i in range(len(ports)):
cmd = start_server % {'id': backend + "_" + str(i),
'port': ports[i]}
ssh_client.exec_command(cmd)
# Allow ssh_client connection to fall out of scope
def _create_listener(self, load_balancer_id, default_pool_id=None):
"""Create a listener with HTTP protocol listening on port 80."""
self.create_listener_kwargs = {'protocol': 'HTTP',
'protocol_port': 80,
'default_pool_id': default_pool_id}
self.listener = self.listeners_client.create_listener(
lb_id=load_balancer_id,
**self.create_listener_kwargs)
self.assertTrue(self.listener)
self.addCleanup(self._cleanup_listener, self.listener['id'],
load_balancer_id)
LOG.info('Waiting for lb status on create listener id: %s',
self.listener['id'])
self._wait_for_load_balancer_status(load_balancer_id)
return self.listener
def _create_health_monitor(self):
"""Create a HTTP health monitor."""
self.hm = self.health_monitors_client.create_health_monitor(
type='HTTP', delay=3, timeout=5,
fall_threshold=5, rise_threshold=5,
lb_id=self.load_balancer['id'],
pool_id=self.pool['id'])
self.assertTrue(self.hm)
        self.addCleanup(self._cleanup_health_monitor,
                        hm_id=self.hm['id'],
                        load_balancer_id=self.load_balancer['id'])
self._wait_for_load_balancer_status(self.load_balancer['id'])
# add clean up members prior to clean up of health monitor
# see bug 1547609
members = self.members_client.list_members(self.load_balancer['id'],
self.pool['id'])
self.assertTrue(members)
for member in members:
self.addCleanup(self._cleanup_member,
load_balancer_id=self.load_balancer['id'],
pool_id=self.pool['id'],
member_id=member['id'])
def _create_pool(self, load_balancer_id,
persistence_type=None, cookie_name=None):
"""Create a pool with ROUND_ROBIN algorithm."""
create_pool_kwargs = {
'lb_algorithm': 'ROUND_ROBIN',
'protocol': 'HTTP'
}
if persistence_type:
create_pool_kwargs.update(
{'session_persistence': {'type': persistence_type}})
if cookie_name:
create_pool_kwargs.update(
{'session_persistence': {'cookie_name': cookie_name}})
self.pool = self.pools_client.create_pool(lb_id=load_balancer_id,
**create_pool_kwargs)
self.assertTrue(self.pool)
self.addCleanup(self._cleanup_pool, self.pool['id'], load_balancer_id)
LOG.info('Waiting for lb status on create pool id: %s',
self.pool['id'])
self._wait_for_load_balancer_status(load_balancer_id)
return self.pool
def _cleanup_load_balancer(self, load_balancer_id):
test_utils.call_and_ignore_notfound_exc(
self.load_balancers_client.delete_load_balancer, load_balancer_id)
self._wait_for_load_balancer_status(load_balancer_id, delete=True)
def _delete_load_balancer_cascade(self, load_balancer_id):
self.load_balancers_client.delete_load_balancer_cascade(
load_balancer_id)
self._wait_for_load_balancer_status(load_balancer_id, delete=True)
def _cleanup_listener(self, listener_id, load_balancer_id=None):
test_utils.call_and_ignore_notfound_exc(
self.listeners_client.delete_listener, load_balancer_id,
listener_id)
if load_balancer_id:
self._wait_for_load_balancer_status(load_balancer_id, delete=True)
def _cleanup_pool(self, pool_id, load_balancer_id=None):
test_utils.call_and_ignore_notfound_exc(
self.pools_client.delete_pool, load_balancer_id, pool_id)
if load_balancer_id:
self._wait_for_load_balancer_status(load_balancer_id, delete=True)
def _cleanup_health_monitor(self, hm_id, load_balancer_id=None):
test_utils.call_and_ignore_notfound_exc(
self.health_monitors_client.delete_health_monitor, hm_id)
if load_balancer_id:
self._wait_for_load_balancer_status(load_balancer_id, delete=True)
def _create_members(self, load_balancer_id, pool_id, backend, ports=None,
subnet_id=None):
"""Create one or more Members based on the given backend
        :param backend: The backend server the members will be on
:param ports: List of listening ports on the backend server
"""
ports = ports or [80, 81]
backend_id = self.servers[backend]
for server_id, ip in six.iteritems(self.server_fixed_ips):
if server_id != backend_id:
continue
for port in ports:
create_member_kwargs = {
'ip_address': ip,
'protocol_port': port,
'weight': 50,
'subnet_id': subnet_id
}
member = self.members_client.create_member(
lb_id=load_balancer_id,
pool_id=pool_id,
**create_member_kwargs)
LOG.info('Waiting for lb status on create member...')
self._wait_for_load_balancer_status(load_balancer_id)
self.members.append(member)
self.assertTrue(self.members)
def _assign_floating_ip_to_lb_vip(self, lb):
public_network_id = config.network.public_network_id
LOG.info('assign_floating_ip_to_lb_vip lb: %s type: %s', lb, type(lb))
port_id = lb['vip']['port_id']
floating_ip = self._create_floating_ip(
thing=lb,
external_network_id=public_network_id,
port_id=port_id,
client=self.floating_ips_client_admin,
tenant_id=self.floating_ips_client_admin.tenant_id)
self.floating_ips.setdefault(lb['id'], [])
self.floating_ips[lb['id']].append(floating_ip)
        # Check the floating IP status before checking the load balancer.
        # The stock self.check_floating_ip_status() helper uses the
        # non-admin client, so use the admin-backed variant instead.
        self.check_flip_status(floating_ip, 'ACTIVE')
def check_flip_status(self, floating_ip, status):
"""Verifies floatingip reaches the given status
:param dict floating_ip: floating IP dict to check status
:param status: target status
:raises AssertionError: if status doesn't match
"""
        # TODO(ptoohill): Find a way to utilize the proper client method
floatingip_id = floating_ip['id']
def refresh():
result = (self.floating_ips_client_admin.
show_floatingip(floatingip_id)['floatingip'])
return status == result['status']
test_utils.call_until_true(refresh, 100, 1)
floating_ip = self.floating_ips_client_admin.show_floatingip(
floatingip_id)['floatingip']
self.assertEqual(status, floating_ip['status'],
message="FloatingIP: {fp} is at status: {cst}. "
"failed to reach status: {st}"
.format(fp=floating_ip, cst=floating_ip['status'],
st=status))
LOG.info('FloatingIP: %(fp)s is at status: %(st)s',
{'fp': floating_ip, 'st': status})
def _create_load_balancer(self, ip_version=4, persistence_type=None):
"""Create a load balancer.
Also assigns a floating IP to the created load balancer.
:param ip_version: IP version to be used for the VIP IP
:returns: ID of the created load balancer
"""
self.create_lb_kwargs = {
'vip': {'subnet_id': self.subnet['id']},
'project_id': self.load_balancers_client.tenant_id}
self.load_balancer = self.load_balancers_client.create_load_balancer(
**self.create_lb_kwargs)
lb_id = self.load_balancer['id']
self.addCleanup(self._cleanup_load_balancer, lb_id)
LOG.info('Waiting for lb status on create load balancer id: %s', lb_id)
self.load_balancer = self._wait_for_load_balancer_status(
load_balancer_id=lb_id,
provisioning_status='ACTIVE',
operating_status='ONLINE')
self.vip_ip = self.load_balancer['vip'].get('ip_address')
# if the ipv4 is used for lb, then fetch the right values from
# tempest.conf file
if ip_version == 4:
if (config.network.public_network_id and not
config.network.project_networks_reachable):
load_balancer = self.load_balancer
self._assign_floating_ip_to_lb_vip(load_balancer)
self.vip_ip = self.floating_ips[
load_balancer['id']][0]['floating_ip_address']
# Currently the ovs-agent is not enforcing security groups on the
# vip port - see https://bugs.launchpad.net/neutron/+bug/1163569
# However the linuxbridge-agent does, and it is necessary to add a
# security group with a rule that allows tcp port 80 to the vip port.
self.ports_client_admin.update_port(
self.load_balancer['vip']['port_id'],
security_groups=[self.security_group['id']])
return lb_id
def _create_load_balancer_over_quota(self):
"""Attempt to create a load balancer over quota.
Creates two load balancers one after the other expecting
the second create to exceed the configured quota.
:returns: Response body from the request
"""
self.create_lb_kwargs = {
'vip': {'subnet_id': self.subnet['id']},
'project_id': self.load_balancers_client.tenant_id}
self.load_balancer = self.load_balancers_client.create_load_balancer(
**self.create_lb_kwargs)
lb_id = self.load_balancer['id']
self.addCleanup(self._cleanup_load_balancer, lb_id)
self.create_lb_kwargs = {
'vip': {'subnet_id': self.subnet['id']},
'project_id': self.load_balancers_client.tenant_id}
lb_client = self.load_balancers_client
lb_client.create_load_balancer_over_quota(
**self.create_lb_kwargs)
LOG.info('Waiting for lb status on create load balancer id: %s',
lb_id)
self.load_balancer = self._wait_for_load_balancer_status(
load_balancer_id=lb_id,
provisioning_status='ACTIVE',
operating_status='ONLINE')
def _wait_for_load_balancer_status(self, load_balancer_id,
provisioning_status='ACTIVE',
operating_status='ONLINE',
delete=False):
interval_time = config.octavia.lb_build_interval
timeout = config.octavia.lb_build_timeout
end_time = time.time() + timeout
while time.time() < end_time:
try:
lb = self.load_balancers_client.get_load_balancer(
load_balancer_id)
except lib_exc.NotFound as e:
if delete:
return
else:
raise e
LOG.info('provisioning_status: %s operating_status: %s',
lb.get('provisioning_status'),
lb.get('operating_status'))
if delete and lb.get('provisioning_status') == 'DELETED':
break
elif (lb.get('provisioning_status') == provisioning_status and
lb.get('operating_status') == operating_status):
break
elif (lb.get('provisioning_status') == 'ERROR' or
lb.get('operating_status') == 'ERROR'):
raise Exception(
_("Wait for load balancer for load balancer: {lb_id} "
"ran for {timeout} seconds and an ERROR was encountered "
"with provisioning status: {provisioning_status} and "
"operating status: {operating_status}").format(
timeout=timeout,
lb_id=lb.get('id'),
provisioning_status=provisioning_status,
operating_status=operating_status))
time.sleep(interval_time)
else:
raise Exception(
_("Wait for load balancer ran for {timeout} seconds and did "
"not observe {lb_id} reach {provisioning_status} "
"provisioning status and {operating_status} "
"operating status.").format(
timeout=timeout,
lb_id=lb.get('id'),
provisioning_status=provisioning_status,
operating_status=operating_status))
return lb
def _wait_for_pool_session_persistence(self, pool_id, sp_type=None):
interval_time = config.octavia.build_interval
timeout = config.octavia.build_timeout
end_time = time.time() + timeout
while time.time() < end_time:
pool = self.pools_client.get_pool(self.load_balancer['id'],
pool_id)
sp = pool.get('session_persistence', None)
            if ((sp_type is None and sp is None) or
                    (sp and sp.get('type') == sp_type)):
                return pool
time.sleep(interval_time)
raise Exception(
_("Wait for pool ran for {timeout} seconds and did "
"not observe {pool_id} update session persistence type "
"to {type}.").format(
timeout=timeout,
pool_id=pool_id,
type=sp_type))
def _check_members_balanced(self, members=None):
"""Check that back-end members are load balanced.
1. Send requests on the floating ip associated with the VIP
2. Check that the requests are shared between the members given
3. Check that no unexpected members were balanced.
"""
members = members or ['server1_0', 'server1_1']
        members = [six.b(m) if isinstance(m, six.text_type) else m
                   for m in members]
LOG.info(_('Checking all members are balanced...'))
self._wait_for_http_service(self.vip_ip)
LOG.info(_('Connection to %(vip)s is valid'), {'vip': self.vip_ip})
counters = self._send_concurrent_requests(self.vip_ip)
for member, counter in six.iteritems(counters):
LOG.info(_('Member %(member)s saw %(counter)s requests.'),
{'member': member, 'counter': counter})
self.assertGreater(counter, 0,
'Member %s never balanced' % member)
for member in members:
if member not in list(counters):
raise Exception(
_("Member {member} was never balanced.").format(
member=member))
for member in list(counters):
if member not in members:
raise Exception(
_("Member {member} was balanced when it should not "
"have been.").format(member=member))
LOG.info(_('Done checking all members are balanced...'))
def _wait_for_http_service(self, check_ip, port=80):
def try_connect(check_ip, port):
try:
LOG.info('checking connection to ip: %s port: %d',
check_ip, port)
resp = urllib2.urlopen("http://{0}:{1}/".format(check_ip,
port))
if resp.getcode() == 200:
return True
return False
except IOError as e:
LOG.info('Got IOError in check connection: %s', e)
return False
except error.HTTPError as e:
LOG.info('Got HTTPError in check connection: %s', e)
return False
timeout = config.validation.ping_timeout
start = time.time()
while not try_connect(check_ip, port):
if (time.time() - start) > timeout:
message = "Timed out trying to connect to %s" % check_ip
raise lib_exc.TimeoutException(message)
time.sleep(1)
def _send_requests(self, vip_ip, path=''):
counters = dict()
for i in range(self.num):
try:
server = urllib2.urlopen("http://{0}/{1}".format(vip_ip, path),
None, 2).read()
if server not in counters:
counters[server] = 1
else:
counters[server] += 1
# HTTP exception means fail of server, so don't increase counter
# of success and continue connection tries
except (error.HTTPError, error.URLError,
socket.timeout, socket.error) as e:
LOG.info('Got Error in sending request: %s', e)
continue
return counters
def _send_concurrent_requests(self, vip_ip, path='', clients=5,
timeout=None):
class ClientThread(threading.Thread):
def __init__(self, test_case, cid, vip_ip, path=''):
super(ClientThread, self).__init__(
name='ClientThread-{0}'.format(cid))
self.vip_ip = vip_ip
self.path = path
self.test_case = test_case
self.counters = dict()
def run(self):
# NOTE(dlundquist): _send_requests() does not mutate
# BaseTestCase so concurrent uses of _send_requests does not
# require a mutex.
self.counters = self.test_case._send_requests(self.vip_ip,
path=self.path)
def join(self, timeout=None):
start = time.time()
super(ClientThread, self).join(timeout)
return time.time() - start
client_threads = [ClientThread(self, i, vip_ip, path=path)
for i in range(clients)]
for ct in client_threads:
ct.start()
if timeout is None:
# timeout for all client threads defaults to 400ms per request
timeout = self.num * 0.4
total_counters = dict()
for ct in client_threads:
timeout -= ct.join(timeout)
if timeout <= 0:
LOG.error('Client thread %s timed out', ct.name)
return dict()
for server in list(ct.counters):
if server not in total_counters:
total_counters[server] = 0
total_counters[server] += ct.counters[server]
return total_counters
def _check_load_balancing_after_deleting_resources(self):
"""Check load balancer after deleting resources
Assert that no traffic is sent to any backend servers
"""
counters = self._send_requests(self.vip_ip)
if counters:
for server, counter in six.iteritems(counters):
self.assertEqual(
counter, 0,
'Server %s saw requests when it should not have' % server)
def _check_source_ip_persistence(self):
"""Check source ip session persistence.
Verify that all requests from our ip are answered by the same server
that handled it the first time.
"""
# Check that backends are reachable
self._wait_for_http_service(self.vip_ip)
resp = []
for count in range(10):
resp.append(
urllib2.urlopen("http://{0}/".format(self.vip_ip)).read())
self.assertEqual(len(set(resp)), 1)
def _update_pool_session_persistence(self, persistence_type=None,
cookie_name=None):
"""Update a pool with new session persistence type and cookie name."""
update_data = {}
if persistence_type:
update_data = {"session_persistence": {
"type": persistence_type}}
if cookie_name:
update_data['session_persistence'].update(
{"cookie_name": cookie_name})
        self.pools_client.update_pool(self.load_balancer['id'],
                                      self.pool['id'], **update_data)
        self.pool = self._wait_for_pool_session_persistence(
            self.pool['id'], persistence_type)
self._wait_for_load_balancer_status(self.load_balancer['id'])
if persistence_type:
self.assertEqual(persistence_type,
self.pool['session_persistence']['type'])
if cookie_name:
self.assertEqual(cookie_name,
self.pool['session_persistence']['cookie_name'])
def _check_cookie_session_persistence(self):
"""Check cookie persistence types by injecting cookies in requests."""
# Send first request and get cookie from the server's response
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
opener.open("http://{0}/".format(self.vip_ip))
resp = []
# Send 10 subsequent requests with the cookie inserted in the headers.
for count in range(10):
request = urllib2.Request("http://{0}/".format(self.vip_ip))
cj.add_cookie_header(request)
response = urllib2.urlopen(request)
resp.append(response.read())
self.assertEqual(len(set(resp)), 1, message=resp)
def _create_floating_ip(self, thing, external_network_id=None,
port_id=None, client=None, tenant_id=None):
"""Create a floating IP and associate to a resource/port on Neutron."""
if not tenant_id:
try:
tenant_id = thing['tenant_id']
except Exception:
# Thing probably migrated to project_id, grab that...
tenant_id = thing['project_id']
if not external_network_id:
external_network_id = config.network.public_network_id
if not client:
client = self.floating_ips_client
if not port_id:
port_id, ip4 = self._get_server_port_id_and_ip4(thing)
else:
ip4 = None
result = client.create_floatingip(
floating_network_id=external_network_id,
port_id=port_id,
tenant_id=tenant_id,
fixed_ip_address=ip4
)
floating_ip = result['floatingip']
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
self.floating_ips_client.delete_floatingip,
floating_ip['id'])
return floating_ip
def copy_file_to_host(self, file_from, dest, host, username, pkey):
dest = "%s@%s:%s" % (username, host, dest)
cmd = ("scp -v -o UserKnownHostsFile=/dev/null "
"-o StrictHostKeyChecking=no "
"-i %(pkey)s %(file1)s %(dest)s" % {'pkey': pkey,
'file1': file_from,
'dest': dest})
return self.execute(cmd)
def execute(self, cmd, cwd=None):
args = shlex.split(cmd)
subprocess_args = {'stdout': subprocess.PIPE,
'stderr': subprocess.STDOUT,
'cwd': cwd}
proc = subprocess.Popen(args, **subprocess_args)
stdout, stderr = proc.communicate()
if proc.returncode != 0:
            LOG.error('Command %s returned with exit status %s, output %s, '
                      'error %s', cmd, proc.returncode, stdout, stderr)
return stdout
def _set_quotas(self, project_id=None, load_balancer=20, listener=20,
pool=20, health_monitor=20, member=20):
if not project_id:
project_id = self.networks_client.tenant_id
body = {'quota': {
'load_balancer': load_balancer, 'listener': listener,
'pool': pool, 'health_monitor': health_monitor, 'member': member}}
return self.quotas_client.update_quotas(project_id, **body)
def _create_load_balancer_tree(self, ip_version=4, cleanup=True):
# TODO(ptoohill): remove or null out project ID when Octavia supports
# keystone auth and automatically populates it for us.
project_id = self.networks_client.tenant_id
create_members = self._create_members_kwargs(self.subnet['id'])
create_pool = {'project_id': project_id,
'lb_algorithm': 'ROUND_ROBIN',
'protocol': 'HTTP',
'members': create_members}
create_listener = {'project_id': project_id,
'protocol': 'HTTP',
'protocol_port': 80,
'default_pool': create_pool}
create_lb = {'project_id': project_id,
'vip': {'subnet_id': self.subnet['id']},
'listeners': [create_listener]}
# Set quotas back and finish the test
self._set_quotas(project_id=project_id)
self.load_balancer = (self.load_balancers_client
.create_load_balancer_graph(create_lb))
load_balancer_id = self.load_balancer['id']
if cleanup:
self.addCleanup(self._cleanup_load_balancer, load_balancer_id)
LOG.info('Waiting for lb status on create load balancer id: %s',
load_balancer_id)
self.load_balancer = self._wait_for_load_balancer_status(
load_balancer_id)
self.vip_ip = self.load_balancer['vip'].get('ip_address')
# if the ipv4 is used for lb, then fetch the right values from
# tempest.conf file
if ip_version == 4:
if (config.network.public_network_id and
not config.network.project_networks_reachable):
load_balancer = self.load_balancer
self._assign_floating_ip_to_lb_vip(load_balancer)
self.vip_ip = self.floating_ips[
load_balancer['id']][0]['floating_ip_address']
# Currently the ovs-agent is not enforcing security groups on the
# vip port - see https://bugs.launchpad.net/neutron/+bug/1163569
# However the linuxbridge-agent does, and it is necessary to add a
# security group with a rule that allows tcp port 80 to the vip port.
self.ports_client_admin.update_port(
self.load_balancer['vip']['port_id'],
security_groups=[self.security_group['id']])
def _create_members_kwargs(self, subnet_id=None):
"""Create one or more Members
In case there is only one server, create both members with the same ip
but with different ports to listen on.
"""
create_member_kwargs = []
for server_id, ip in six.iteritems(self.server_fixed_ips):
create_member_kwargs.append({'ip_address': ip,
'protocol_port': 80,
'weight': 50,
'subnet_id': subnet_id})
return create_member_kwargs
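The `_wait_for_load_balancer_status` loop above is a standard poll-until-status pattern. A minimal standalone sketch of that pattern, independent of the tempest clients (the `get_status` callable and parameter names here are illustrative, not part of any Octavia API):

```python
import time


def wait_for_status(get_status, expected, timeout=60, interval=1,
                    error_states=('ERROR',), clock=time.time,
                    sleep=time.sleep):
    """Poll get_status() until it returns `expected`.

    Raises RuntimeError if an error state is seen or the timeout expires.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        status = get_status()
        if status == expected:
            return status
        if status in error_states:
            raise RuntimeError('resource went to status %s' % status)
        sleep(interval)
    raise RuntimeError('timed out waiting for status %s' % expected)


# Example: a fake resource that becomes ACTIVE on the third poll.
_polls = iter(['PENDING_CREATE', 'PENDING_CREATE', 'ACTIVE'])
status = wait_for_status(lambda: next(_polls), 'ACTIVE',
                         timeout=5, interval=0, sleep=lambda s: None)
print(status)  # ACTIVE
```

Injecting `clock` and `sleep` keeps the helper unit-testable without real waiting, which is why the example can run instantly.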


@@ -1,34 +0,0 @@
# Copyright 2017 Rackspace, US Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest.common import utils
from tempest.lib import decorators
from octavia.tests.tempest.v1.scenario import base
class TestLoadBalancerQuota(base.BaseTestCase):
    """This test attempts to exceed a configured load balancer quota.
The following is the scenario outline:
1. Set the load balancer quota to one.
2. Create two load balancers, expecting the second create to fail
with a quota exceeded code.
"""
@utils.services('compute', 'network')
@decorators.skip_because(bug="1656110")
def test_load_balancer_quota(self):
self._set_quotas(project_id=None, load_balancer=1)
self._create_load_balancer_over_quota()
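The quota scenario above expects the second create to be rejected once the project hits its limit; server-side, that check reduces to counting in-use resources against a configured cap. A toy sketch of that rule (the class and exception names are illustrative, not Octavia's actual quota implementation):

```python
class QuotaExceeded(Exception):
    """Raised when a project asks for more resources than its quota allows."""


class QuotaTracker:
    """Track a project's in-use resource count against a fixed limit."""

    def __init__(self, limit):
        self.limit = limit
        self.in_use = 0

    def reserve(self):
        # Reject the request once the project is already at its quota.
        if self.in_use >= self.limit:
            raise QuotaExceeded('load_balancer quota (%d) exceeded'
                                % self.limit)
        self.in_use += 1


quota = QuotaTracker(limit=1)
quota.reserve()         # first load balancer: accepted
try:
    quota.reserve()     # second load balancer: rejected
except QuotaExceeded as exc:
    print(exc)          # load_balancer quota (1) exceeded
```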


@@ -1,46 +0,0 @@
# Copyright 2016 Rackspace US Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest.common import utils
from octavia.tests.tempest.v1.scenario import base
class TestListenerBasic(base.BaseTestCase):
"""This test checks basic listener functionality.
The following is the scenario outline:
1. Create an instance.
2. SSH to the instance and start two web server processes.
3. Create a load balancer, listener, pool and two members and with
ROUND_ROBIN algorithm. Associate the VIP with a floating ip.
4. Send NUM requests to the floating ip and check that they are shared
between the two web server processes.
5. Delete listener and validate the traffic is not sent to any members.
"""
@utils.services('compute', 'network')
def test_load_balancer_basic(self):
self._create_server('server1')
self._start_backend_httpd_processes('server1')
lb_id = self._create_load_balancer()
pool = self._create_pool(lb_id)
listener = self._create_listener(lb_id, default_pool_id=pool['id'])
self._create_members(lb_id, pool['id'], 'server1',
subnet_id=self.subnet['id'])
self._check_members_balanced(['server1_0', 'server1_1'])
self._cleanup_pool(pool['id'], lb_id)
self._cleanup_listener(listener['id'], lb_id)
self._check_load_balancing_after_deleting_resources()
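The balance check used by these scenarios (`_check_members_balanced`) boils down to counting which member answered each request, then asserting that every expected member served at least one request and that no unexpected member served any. A dependency-free sketch of that bookkeeping:

```python
from collections import Counter


def check_balanced(responses, expected_members):
    """Assert every expected member appears in `responses` and nothing else.

    `responses` is the list of member identifiers that answered each
    request; returns the per-member request counts.
    """
    counters = Counter(responses)
    missing = set(expected_members) - set(counters)
    if missing:
        raise AssertionError('members never balanced: %s' % sorted(missing))
    unexpected = set(counters) - set(expected_members)
    if unexpected:
        raise AssertionError('unexpected members balanced: %s'
                             % sorted(unexpected))
    return dict(counters)


print(check_balanced(['server1_0', 'server1_1', 'server1_0'],
                     ['server1_0', 'server1_1']))
# {'server1_0': 2, 'server1_1': 1}
```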


@@ -1,46 +0,0 @@
# Copyright 2016 Rackspace US Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest.common import utils
from tempest import config
from octavia.tests.tempest.v1.scenario import base
config = config.CONF
class TestLoadBalancerTreeMinimal(base.BaseTestCase):
@utils.services('compute', 'network')
def test_load_balancer_tree_minimal(self):
"""This test checks basic load balancing.
The following is the scenario outline:
1. Create an instance.
2. SSH to the instance and start two servers.
3. Create a load balancer graph with two members and with ROUND_ROBIN
algorithm.
4. Associate the VIP with a floating ip.
5. Send NUM requests to the floating ip and check that they are shared
between the two servers.
"""
self._create_server('server1')
self._start_backend_httpd_processes('server1')
self._create_server('server2')
self._start_backend_httpd_processes('server2')
self._create_load_balancer_tree(cleanup=False)
self._check_members_balanced(['server1_0', 'server2_0'])
self._delete_load_balancer_cascade(self.load_balancer.get('id'))
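The graph create exercised above posts one nested document describing the whole tree (load balancer, listener, pool, members) in a single call; assembling it is plain dict composition. A sketch using the same field names as `_create_load_balancer_tree` (the helper itself is illustrative, not part of the client):

```python
def build_lb_graph(project_id, subnet_id, member_ips, port=80):
    """Compose a single-call load balancer graph:
    load balancer -> listener -> default pool -> members."""
    members = [{'ip_address': ip,
                'protocol_port': port,
                'weight': 50,
                'subnet_id': subnet_id}
               for ip in member_ips]
    pool = {'project_id': project_id,
            'lb_algorithm': 'ROUND_ROBIN',
            'protocol': 'HTTP',
            'members': members}
    listener = {'project_id': project_id,
                'protocol': 'HTTP',
                'protocol_port': port,
                'default_pool': pool}
    return {'project_id': project_id,
            'vip': {'subnet_id': subnet_id},
            'listeners': [listener]}


graph = build_lb_graph('proj-1', 'subnet-1', ['10.0.0.5', '10.0.0.6'])
print(len(graph['listeners'][0]['default_pool']['members']))  # 2
```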


@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

@@ -1,59 +0,0 @@
# Copyright 2016 Rackspace US Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_serialization import jsonutils
from six.moves.urllib import parse
from tempest.lib.common import rest_client
class QuotasClient(rest_client.RestClient):
"""Tests Quotas API."""
_QUOTAS_URL = "v2/lbaas/quotas/{project_id}"
def list_quotas(self, params=None):
"""List all non-default quotas."""
url = "v2/lbaas/quotas"
if params:
url = '{0}?{1}'.format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBodyList(resp, body)
def get_quotas(self, project_id, params=None):
"""Get Quotas for a project."""
url = self._QUOTAS_URL.format(project_id=project_id)
if params:
url = '{0}?{1}'.format(url, parse.urlencode(params))
resp, body = self.get(url)
body = jsonutils.loads(body)
self.expected_success(200, resp.status)
return rest_client.ResponseBody(resp, body)
def update_quotas(self, project_id, **kwargs):
"""Update a Quotas for a project."""
url = self._QUOTAS_URL.format(project_id=project_id)
put_body = jsonutils.dumps(kwargs)
resp, body = self.put(url, put_body)
body = jsonutils.loads(body)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
def delete_quotas(self, project_id):
"""Delete an Quotas for a project."""
url = self._QUOTAS_URL.format(project_id=project_id)
resp, body = self.delete(url)
self.expected_success(202, resp.status)
return rest_client.ResponseBody(resp, body)
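The URL handling in these client methods is plain string assembly; a self-contained sketch of it, detached from the tempest `RestClient` machinery (the project ID and params here are made up):

```python
from urllib.parse import urlencode

_QUOTAS_URL = "v2/lbaas/quotas/{project_id}"

def build_quotas_url(project_id, params=None):
    # mirrors how get_quotas/update_quotas format the request path,
    # appending an urlencoded query string only when params are given
    url = _QUOTAS_URL.format(project_id=project_id)
    if params:
        url = '{0}?{1}'.format(url, urlencode(params))
    return url
```

Keeping the path template as a class-level constant, as `QuotasClient` does, means every method formats the same resource path consistently.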

@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

@@ -1,903 +0,0 @@
# Copyright 2016 Rackspace Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
try:
from http import cookiejar as cookielib
except ImportError:
import cookielib
import os
import shlex
import shutil
import socket
import subprocess
import tempfile
import threading
import time
from oslo_log import log as logging
import six
from six.moves.urllib import error
from six.moves.urllib import request as urllib2
from tempest import clients
from tempest.common import credentials_factory
from tempest.common import utils
from tempest.common import waiters
from tempest import config
from tempest import exceptions
from tempest.lib.common.utils import test_utils
from tempest.lib import exceptions as lib_exc
from octavia.i18n import _
from octavia.tests.tempest.common import manager
from octavia.tests.tempest.v1.clients import health_monitors_client
from octavia.tests.tempest.v1.clients import listeners_client
from octavia.tests.tempest.v1.clients import load_balancers_client
from octavia.tests.tempest.v1.clients import members_client
from octavia.tests.tempest.v1.clients import pools_client
from octavia.tests.tempest.v2.clients import quotas_client
config = config.CONF
LOG = logging.getLogger(__name__)
HTTPD_SRC = os.path.abspath(
os.path.join(os.path.dirname(__file__),
'../../../contrib/httpd.go'))
class BaseTestCase(manager.NetworkScenarioTest):
def setUp(self):
super(BaseTestCase, self).setUp()
self.servers_keypairs = {}
self.servers = {}
self.members = []
self.floating_ips = {}
self.servers_floating_ips = {}
self.server_ips = {}
self.start_port = 80
self.num = 50
self.server_fixed_ips = {}
self._create_security_group_for_test()
self._set_net_and_subnet()
mgr = self.get_client_manager()
auth_provider = mgr.auth_provider
region = config.network.region or config.identity.region
self.client_args = [auth_provider, 'load-balancer', region]
self.load_balancers_client = (
load_balancers_client.LoadBalancersClient(*self.client_args))
self.listeners_client = (
listeners_client.ListenersClient(*self.client_args))
self.pools_client = pools_client.PoolsClient(*self.client_args)
self.members_client = members_client.MembersClient(
*self.client_args)
self.health_monitors_client = (
health_monitors_client.HealthMonitorsClient(
*self.client_args))
self.quotas_client = quotas_client.QuotasClient(*self.client_args)
# admin network client needed for assigning octavia port to flip
os_admin = clients.Manager(
credentials_factory.get_configured_admin_credentials())
os_admin.auth_provider.fill_credentials()
self.floating_ips_client_admin = os_admin.floating_ips_client
self.ports_client_admin = os_admin.ports_client
@classmethod
def skip_checks(cls):
super(BaseTestCase, cls).skip_checks()
cfg = config.network
if not utils.is_extension_enabled('lbaasv2', 'network'):
msg = 'LBaaS Extension is not enabled'
raise cls.skipException(msg)
if not (cfg.project_networks_reachable or cfg.public_network_id):
msg = ('Either project_networks_reachable must be "true", or '
'public_network_id must be defined.')
raise cls.skipException(msg)
def _set_net_and_subnet(self):
"""Set Network and Subnet
Query and set appropriate network and subnet attributes to be used
for the test. Existing tenant networks are used if they are found.
The configured private network and associated subnet is used as a
fallback in absence of tenant networking.
"""
try:
tenant_net = self.os_admin.networks_client.list_networks(
tenant_id=self.tenant_id)['networks'][0]
except IndexError:
tenant_net = None
if tenant_net:
tenant_subnet = self.os_admin.subnets_client.list_subnets(
tenant_id=self.tenant_id)['subnets'][0]
self.subnet = tenant_subnet
self.network = tenant_net
else:
self.network = self._get_network_by_name(
config.compute.fixed_network_name)
# We are assuming that the first subnet associated
# with the fixed network is the one we want. In the future, we
# should instead pull a subnet id from config, which is set by
# devstack/admin/etc.
subnet = self.os_admin.subnets_client.list_subnets(
network_id=self.network['id'])['subnets'][0]
self.subnet = subnet
def _create_security_group_for_test(self):
self.security_group = self._create_security_group(
tenant_id=self.tenant_id)
self._create_security_group_rules_for_port(self.start_port)
self._create_security_group_rules_for_port(self.start_port + 1)
def _create_security_group_rules_for_port(self, port):
rule = {
'direction': 'ingress',
'protocol': 'tcp',
'port_range_min': port,
'port_range_max': port,
}
self._create_security_group_rule(
secgroup=self.security_group,
tenant_id=self.tenant_id,
**rule)
def _ipv6_subnet(self, address6_mode):
router = self._get_router(tenant_id=self.tenant_id)
self.network = self._create_network(tenant_id=self.tenant_id)
self.subnet = self._create_subnet(network=self.network,
namestart='sub6',
ip_version=6,
ipv6_ra_mode=address6_mode,
ipv6_address_mode=address6_mode)
self.subnet.add_to_router(router_id=router['id'])
self.addCleanup(self.subnet.delete)
def _create_server(self, name):
keypair = self.create_keypair()
security_groups = [{'name': self.security_group['name']}]
create_kwargs = {
'networks': [
{'uuid': self.network['id']},
],
'key_name': keypair['name'],
'security_groups': security_groups,
}
net_name = self.network['name']
server = self.create_server(name=name, **create_kwargs)
waiters.wait_for_server_status(self.servers_client,
server['id'], 'ACTIVE')
server = self.servers_client.show_server(server['id'])
server = server['server']
self.servers_keypairs[server['id']] = keypair
LOG.info('servers_keypairs: %s', self.servers_keypairs)
if (config.network.public_network_id and not
config.network.project_networks_reachable):
public_network_id = config.network.public_network_id
floating_ip = self._create_floating_ip(
server, public_network_id)
self.floating_ips[floating_ip['id']] = server
self.server_ips[server['id']] = floating_ip['floating_ip_address']
else:
self.server_ips[server['id']] = (
server['addresses'][net_name][0]['addr'])
self.server_fixed_ips[server['id']] = (
server['addresses'][net_name][0]['addr'])
self.assertTrue(self.servers_keypairs)
self.servers[name] = server['id']
return server
def _create_servers(self, num=2):
for count in range(num):
name = "server%s" % (count + 1)
self.server = self._create_server(name=name)
self.assertEqual(len(self.servers_keypairs), num)
def _stop_server(self, name):
for sname, value in six.iteritems(self.servers):
if sname == name:
LOG.info('STOPPING SERVER: %s', sname)
self.servers_client.stop_server(value)
waiters.wait_for_server_status(self.servers_client,
value, 'SHUTOFF')
LOG.info('STOPPING SERVER COMPLETED!')
def _start_server(self, name):
for sname, value in six.iteritems(self.servers):
if sname == name:
self.servers_client.start_server(value)
waiters.wait_for_server_status(self.servers_client,
value, 'ACTIVE')
def _build_static_httpd(self):
"""Compile test httpd as a static binary
returns file path of resulting binary file
"""
builddir = tempfile.mkdtemp()
shutil.copyfile(HTTPD_SRC, os.path.join(builddir, 'httpd.go'))
self.execute('go build -ldflags '
'"-linkmode external -extldflags -static" '
'httpd.go', cwd=builddir)
return os.path.join(builddir, 'httpd')
def _start_backend_httpd_processes(self, backend, ports=None):
"""Start one or more webservers on a given backend server
1. SSH to the backend
2. Start http backends listening on the given ports
"""
ports = ports or [80, 81]
httpd = self._build_static_httpd()
backend_id = self.servers[backend]
for server_id, ip in six.iteritems(self.server_ips):
if server_id != backend_id:
continue
private_key = self.servers_keypairs[server_id]['private_key']
username = config.validation.image_ssh_user
ssh_client = self.get_remote_client(
ip_address=ip,
private_key=private_key)
with tempfile.NamedTemporaryFile() as key:
key.write(private_key.encode('utf-8'))
key.flush()
self.copy_file_to_host(httpd,
"/dev/shm/httpd",
ip,
username, key.name)
# Start httpd
start_server = ('sudo sh -c "ulimit -n 100000; screen -d -m '
'/dev/shm/httpd -id %(id)s -port %(port)s"')
for i in range(len(ports)):
cmd = start_server % {'id': backend + "_" + str(i),
'port': ports[i]}
ssh_client.exec_command(cmd)
# Allow ssh_client connection to fall out of scope
def _create_listener(self, load_balancer_id, default_pool_id=None):
"""Create a listener with HTTP protocol listening on port 80."""
self.create_listener_kwargs = {'protocol': 'HTTP',
'protocol_port': 80,
'default_pool_id': default_pool_id}
self.listener = self.listeners_client.create_listener(
lb_id=load_balancer_id,
**self.create_listener_kwargs)
self.assertTrue(self.listener)
self.addCleanup(self._cleanup_listener, self.listener['id'],
load_balancer_id)
LOG.info('Waiting for lb status on create listener id: %s',
self.listener['id'])
self._wait_for_load_balancer_status(load_balancer_id)
return self.listener
def _create_health_monitor(self):
"""Create a HTTP health monitor."""
self.hm = self.health_monitors_client.create_health_monitor(
type='HTTP', delay=3, timeout=5,
fall_threshold=5, rise_threshold=5,
lb_id=self.load_balancer['id'],
pool_id=self.pool['id'])
self.assertTrue(self.hm)
self.addCleanup(self._cleanup_health_monitor,
self.hm['id'],
load_balancer_id=self.load_balancer['id'])
self._wait_for_load_balancer_status(self.load_balancer['id'])
# add clean up members prior to clean up of health monitor
# see bug 1547609
members = self.members_client.list_members(self.load_balancer['id'],
self.pool['id'])
self.assertTrue(members)
for member in members:
self.addCleanup(self._cleanup_member,
load_balancer_id=self.load_balancer['id'],
pool_id=self.pool['id'],
member_id=member['id'])
def _create_pool(self, load_balancer_id,
persistence_type=None, cookie_name=None):
"""Create a pool with ROUND_ROBIN algorithm."""
create_pool_kwargs = {
'lb_algorithm': 'ROUND_ROBIN',
'protocol': 'HTTP'
}
if persistence_type:
create_pool_kwargs.update(
{'session_persistence': {'type': persistence_type}})
if cookie_name:
create_pool_kwargs.update(
{'session_persistence': {'cookie_name': cookie_name}})
self.pool = self.pools_client.create_pool(lb_id=load_balancer_id,
**create_pool_kwargs)
self.assertTrue(self.pool)
self.addCleanup(self._cleanup_pool, self.pool['id'], load_balancer_id)
LOG.info('Waiting for lb status on create pool id: %s',
self.pool['id'])
self._wait_for_load_balancer_status(load_balancer_id)
return self.pool
def _cleanup_load_balancer(self, load_balancer_id):
test_utils.call_and_ignore_notfound_exc(
self.load_balancers_client.delete_load_balancer, load_balancer_id)
self._wait_for_load_balancer_status(load_balancer_id, delete=True)
def _cleanup_listener(self, listener_id, load_balancer_id=None):
test_utils.call_and_ignore_notfound_exc(
self.listeners_client.delete_listener, load_balancer_id,
listener_id)
if load_balancer_id:
self._wait_for_load_balancer_status(load_balancer_id, delete=True)
def _cleanup_pool(self, pool_id, load_balancer_id=None):
test_utils.call_and_ignore_notfound_exc(
self.pools_client.delete_pool, load_balancer_id, pool_id)
if load_balancer_id:
self._wait_for_load_balancer_status(load_balancer_id, delete=True)
def _cleanup_health_monitor(self, hm_id, load_balancer_id=None):
test_utils.call_and_ignore_notfound_exc(
self.health_monitors_client.delete_health_monitor, hm_id)
if load_balancer_id:
self._wait_for_load_balancer_status(load_balancer_id, delete=True)
def _create_members(self, load_balancer_id, pool_id, backend, ports=None,
subnet_id=None):
"""Create one or more Members based on the given backend
:param backend: The backend server the members will be on
:param ports: List of listening ports on the backend server
"""
ports = ports or [80, 81]
backend_id = self.servers[backend]
for server_id, ip in six.iteritems(self.server_fixed_ips):
if server_id != backend_id:
continue
for port in ports:
create_member_kwargs = {
'ip_address': ip,
'protocol_port': port,
'weight': 50,
'subnet_id': subnet_id
}
member = self.members_client.create_member(
lb_id=load_balancer_id,
pool_id=pool_id,
**create_member_kwargs)
LOG.info('Waiting for lb status on create member...')
self._wait_for_load_balancer_status(load_balancer_id)
self.members.append(member)
self.assertTrue(self.members)
def _assign_floating_ip_to_lb_vip(self, lb):
public_network_id = config.network.public_network_id
LOG.info('assign_floating_ip_to_lb_vip lb: %s type: %s', lb, type(lb))
port_id = lb['vip']['port_id']
floating_ip = self._create_floating_ip(
thing=lb,
external_network_id=public_network_id,
port_id=port_id,
client=self.floating_ips_client_admin,
tenant_id=self.floating_ips_client_admin.tenant_id)
self.floating_ips.setdefault(lb['id'], [])
self.floating_ips[lb['id']].append(floating_ip)
# Check the floating IP status before checking the load balancer.
# We need the admin client here, but check_floating_ip_status() uses
# the non-admin client, so use check_flip_status() instead of:
# self.check_floating_ip_status(floating_ip, 'ACTIVE')
self.check_flip_status(floating_ip, 'ACTIVE')
def check_flip_status(self, floating_ip, status):
"""Verifies floatingip reaches the given status
:param dict floating_ip: floating IP dict to check status
:param status: target status
:raises AssertionError: if status doesn't match
"""
# TODO(ptoohill): Find a way to utilize the proper client method
floatingip_id = floating_ip['id']
def refresh():
result = (self.floating_ips_client_admin.
show_floatingip(floatingip_id)['floatingip'])
return status == result['status']
test_utils.call_until_true(refresh, 100, 1)
floating_ip = self.floating_ips_client_admin.show_floatingip(
floatingip_id)['floatingip']
self.assertEqual(status, floating_ip['status'],
message="FloatingIP: {fp} is at status: {cst}. "
"failed to reach status: {st}"
.format(fp=floating_ip, cst=floating_ip['status'],
st=status))
LOG.info('FloatingIP: %(fp)s is at status: %(st)s',
{'fp': floating_ip, 'st': status})
def _create_load_balancer(self, ip_version=4, persistence_type=None):
"""Create a load balancer.
Also assigns a floating IP to the created load balancer.
:param ip_version: IP version to be used for the VIP IP
:returns: ID of the created load balancer
"""
self.create_lb_kwargs = {'vip': {'subnet_id': self.subnet['id']}}
self.load_balancer = self.load_balancers_client.create_load_balancer(
**self.create_lb_kwargs)
lb_id = self.load_balancer['id']
self.addCleanup(self._cleanup_load_balancer, lb_id)
LOG.info('Waiting for lb status on create load balancer id: %s', lb_id)
self.load_balancer = self._wait_for_load_balancer_status(
load_balancer_id=lb_id,
provisioning_status='ACTIVE',
operating_status='ONLINE')
self.vip_ip = self.load_balancer['vip'].get('ip_address')
# if the ipv4 is used for lb, then fetch the right values from
# tempest.conf file
if ip_version == 4:
if (config.network.public_network_id and not
config.network.project_networks_reachable):
load_balancer = self.load_balancer
self._assign_floating_ip_to_lb_vip(load_balancer)
self.vip_ip = self.floating_ips[
load_balancer['id']][0]['floating_ip_address']
# Currently the ovs-agent is not enforcing security groups on the
# vip port - see https://bugs.launchpad.net/neutron/+bug/1163569
# However the linuxbridge-agent does, and it is necessary to add a
# security group with a rule that allows tcp port 80 to the vip port.
self.ports_client_admin.update_port(
self.load_balancer['vip']['port_id'],
security_groups=[self.security_group['id']])
return lb_id
def _create_load_balancer_over_quota(self):
"""Attempt to create a load balancer over quota.
Creates two load balancers one after the other expecting
the second create to exceed the configured quota.
:returns: Response body from the request
"""
self.create_lb_kwargs = {
'vip': {'subnet_id': self.subnet['id']},
'project_id': self.load_balancers_client.tenant_id}
self.load_balancer = self.load_balancers_client.create_load_balancer(
**self.create_lb_kwargs)
lb_id = self.load_balancer['id']
self.addCleanup(self._cleanup_load_balancer, lb_id)
self.create_lb_kwargs = {
'vip': {'subnet_id': self.subnet['id']},
'project_id': self.load_balancers_client.tenant_id}
lb_client = self.load_balancers_client
lb_client.create_load_balancer_over_quota(
**self.create_lb_kwargs)
LOG.info('Waiting for lb status on create load balancer id: %s', lb_id)
self.load_balancer = self._wait_for_load_balancer_status(
load_balancer_id=lb_id,
provisioning_status='ACTIVE',
operating_status='ONLINE')
def _wait_for_load_balancer_status(self, load_balancer_id,
provisioning_status='ACTIVE',
operating_status='ONLINE',
delete=False):
interval_time = config.octavia.lb_build_interval
timeout = config.octavia.lb_build_timeout
end_time = time.time() + timeout
while time.time() < end_time:
try:
lb = self.load_balancers_client.get_load_balancer(
load_balancer_id)
except lib_exc.NotFound as e:
if delete:
return
else:
raise e
LOG.info('provisioning_status: %s operating_status: %s',
lb.get('provisioning_status'),
lb.get('operating_status'))
if delete and lb.get('provisioning_status') == 'DELETED':
break
elif (lb.get('provisioning_status') == provisioning_status and
lb.get('operating_status') == operating_status):
break
elif (lb.get('provisioning_status') == 'ERROR' or
lb.get('operating_status') == 'ERROR'):
raise Exception(
_("Wait for load balancer for load balancer: {lb_id} "
"ran for {timeout} seconds and an ERROR was encountered "
"with provisioning status: {provisioning_status} and "
"operating status: {operating_status}").format(
timeout=timeout,
lb_id=lb.get('id'),
provisioning_status=provisioning_status,
operating_status=operating_status))
time.sleep(interval_time)
else:
raise Exception(
_("Wait for load balancer ran for {timeout} seconds and did "
"not observe {lb_id} reach {provisioning_status} "
"provisioning status and {operating_status} "
"operating status.").format(
timeout=timeout,
lb_id=lb.get('id'),
provisioning_status=provisioning_status,
operating_status=operating_status))
return lb
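The polling pattern in `_wait_for_load_balancer_status` (fixed interval, hard deadline, short-circuit on a terminal error state) reduces to a generic helper. A sketch under the assumption that the current status is fetched by a caller-supplied callable:

```python
import time

class WaitTimeout(Exception):
    """Raised when the target status is not reached before the deadline."""

def wait_for_status(fetch, target, error_states=("ERROR",),
                    timeout=10.0, interval=0.5):
    """Poll fetch() until it returns target, hits an error state, or times out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = fetch()
        if status == target:
            return status
        if status in error_states:
            # terminal failure: stop polling immediately
            raise RuntimeError("resource went to %s" % status)
        time.sleep(interval)
    raise WaitTimeout("did not reach %s within %.1fs" % (target, timeout))
```

The real method additionally treats `NotFound` as success when waiting for a delete; that branch is omitted here to keep the sketch focused on the polling loop.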
def _wait_for_pool_session_persistence(self, pool_id, sp_type=None):
interval_time = config.octavia.build_interval
timeout = config.octavia.build_timeout
end_time = time.time() + timeout
while time.time() < end_time:
pool = self.pools_client.get_pool(self.load_balancer['id'],
pool_id)
sp = pool.get('session_persistence', None)
if (not (sp_type or sp) or
(sp and sp.get('type') == sp_type)):
return pool
time.sleep(interval_time)
raise Exception(
_("Wait for pool ran for {timeout} seconds and did "
"not observe {pool_id} update session persistence type "
"to {type}.").format(
timeout=timeout,
pool_id=pool_id,
type=sp_type))
def _check_members_balanced(self, members=None):
"""Check that back-end members are load balanced.
1. Send requests on the floating ip associated with the VIP
2. Check that the requests are shared between the members given
3. Check that no unexpected members were balanced.
"""
members = members or ['server1_0', 'server1_1']
LOG.info(_('Checking all members are balanced...'))
self._wait_for_http_service(self.vip_ip)
LOG.info(_('Connection to %(vip)s is valid'), {'vip': self.vip_ip})
counters = self._send_concurrent_requests(self.vip_ip)
for member, counter in six.iteritems(counters):
LOG.info(_('Member %(member)s saw %(counter)s requests.'),
{'member': member, 'counter': counter})
self.assertGreater(counter, 0,
'Member %s never balanced' % member)
for member in members:
if member not in list(counters):
raise Exception(
_("Member {member} was never balanced.").format(
member=member))
for member in list(counters):
if member not in members:
raise Exception(
_("Member {member} was balanced when it should not "
"have been.").format(member=member))
LOG.info(_('Done checking all members are balanced...'))
def _wait_for_http_service(self, check_ip, port=80):
def try_connect(check_ip, port):
try:
LOG.info('checking connection to ip: %s port: %s',
check_ip, port)
resp = urllib2.urlopen("http://{0}:{1}/".format(check_ip,
port))
if resp.getcode() == 200:
return True
return False
except IOError as e:
LOG.info('Got IOError in check connection: %s', e)
return False
except error.HTTPError as e:
LOG.info('Got HTTPError in check connection: %s', e)
return False
timeout = config.validation.ping_timeout
start = time.time()
while not try_connect(check_ip, port):
if (time.time() - start) > timeout:
message = "Timed out trying to connect to %s" % check_ip
raise exceptions.TimeoutException(message)
time.sleep(1)
def _send_requests(self, vip_ip, path=''):
counters = dict()
for i in range(self.num):
try:
server = urllib2.urlopen("http://{0}/{1}".format(vip_ip, path),
None, 2).read()
if server not in counters:
counters[server] = 1
else:
counters[server] += 1
# HTTP exception means fail of server, so don't increase counter
# of success and continue connection tries
except (error.HTTPError, error.URLError,
socket.timeout, socket.error) as e:
LOG.info('Got Error in sending request: %s', e)
continue
return counters
def _send_concurrent_requests(self, vip_ip, path='', clients=5,
timeout=None):
class ClientThread(threading.Thread):
def __init__(self, test_case, cid, vip_ip, path=''):
super(ClientThread, self).__init__(
name='ClientThread-{0}'.format(cid))
self.vip_ip = vip_ip
self.path = path
self.test_case = test_case
self.counters = dict()
def run(self):
# NOTE(dlundquist): _send_requests() does not mutate
# BaseTestCase so concurrent uses of _send_requests does not
# require a mutex.
self.counters = self.test_case._send_requests(self.vip_ip,
path=self.path)
def join(self, timeout=None):
start = time.time()
super(ClientThread, self).join(timeout)
return time.time() - start
client_threads = [ClientThread(self, i, vip_ip, path=path)
for i in range(clients)]
for ct in client_threads:
ct.start()
if timeout is None:
# timeout for all client threads defaults to 400ms per request
timeout = self.num * 0.4
total_counters = dict()
for ct in client_threads:
timeout -= ct.join(timeout)
if timeout <= 0:
LOG.error("Client thread %s timed out", ct.name)
return dict()
for server in list(ct.counters):
if server not in total_counters:
total_counters[server] = 0
total_counters[server] += ct.counters[server]
return total_counters
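The per-thread counters merged at the end of `_send_concurrent_requests` amount to a map-reduce over worker results. A self-contained sketch of that aggregation (the request function is stubbed, and a lock guards the shared total, unlike the join-then-merge approach above):

```python
import threading
from collections import Counter

def run_concurrent_clients(send_requests, clients=5):
    """Run send_requests() in several threads and merge their counters."""
    totals = Counter()
    lock = threading.Lock()

    def worker():
        # send_requests() returns {server_name: hit_count} for one client
        counts = send_requests()
        with lock:
            totals.update(counts)  # Counter.update adds counts per key

    threads = [threading.Thread(target=worker) for _ in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return totals
```

As the NOTE in the original code points out, merging only after `join()` also works without a mutex; the lock here simply makes the sketch safe if merging ever moves into the workers.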
def _traffic_validation_after_stopping_server(self):
"""Check that the requests are sent to the only ACTIVE server."""
LOG.info('Starting traffic_validation_after_stopping_server...')
counters = self._send_requests(self.vip_ip, ["server1", "server2"])
LOG.info('Counters is: %s', counters)
# Assert that no traffic is sent to server1.
for member, counter in six.iteritems(counters):
if member == 'server1':
self.assertEqual(counter, 0,
'Member %s is not balanced' % member)
def _check_load_balancing_after_deleting_resources(self):
"""Check load balancer after deleting resources
Assert that no traffic is sent to any backend servers
"""
counters = self._send_requests(self.vip_ip)
if counters:
for server, counter in six.iteritems(counters):
self.assertEqual(
counter, 0,
'Server %s saw requests when it should not have' % server)
def _check_source_ip_persistence(self):
"""Check source ip session persistence.
Verify that all requests from our ip are answered by the same server
that handled it the first time.
"""
# Check that backends are reachable
self._wait_for_http_service(self.vip_ip)
resp = []
for count in range(10):
resp.append(
urllib2.urlopen("http://{0}/".format(self.vip_ip)).read())
self.assertEqual(len(set(resp)), 1)
def _update_pool_session_persistence(self, persistence_type=None,
cookie_name=None):
"""Update a pool with new session persistence type and cookie name."""
update_data = {}
if persistence_type:
update_data = {"session_persistence": {
"type": persistence_type}}
if cookie_name:
update_data['session_persistence'].update(
{"cookie_name": cookie_name})
self.pools_client.update_pool(self.load_balancer['id'],
self.pool['id'], **update_data)
self.pool = self._wait_for_pool_session_persistence(
self.pool['id'],
persistence_type)
self._wait_for_load_balancer_status(self.load_balancer['id'])
if persistence_type:
self.assertEqual(persistence_type,
self.pool['session_persistence']['type'])
if cookie_name:
self.assertEqual(cookie_name,
self.pool['session_persistence']['cookie_name'])
def _check_cookie_session_persistence(self):
"""Check cookie persistence types by injecting cookies in requests."""
# Send first request and get cookie from the server's response
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
opener.open("http://{0}/".format(self.vip_ip))
resp = []
# Send 10 subsequent requests with the cookie inserted in the headers.
for count in range(10):
request = urllib2.Request("http://{0}/".format(self.vip_ip))
cj.add_cookie_header(request)
response = urllib2.urlopen(request)
resp.append(response.read())
self.assertEqual(len(set(resp)), 1, message=resp)
def _create_floating_ip(self, thing, external_network_id=None,
port_id=None, client=None, tenant_id=None):
"""Create a floating IP and associate to a resource/port on Neutron."""
if not tenant_id:
try:
tenant_id = thing['tenant_id']
except Exception:
# Thing probably migrated to project_id, grab that...
tenant_id = thing['project_id']
if not external_network_id:
external_network_id = config.network.public_network_id
if not client:
client = self.floating_ips_client
if not port_id:
port_id, ip4 = self._get_server_port_id_and_ip4(thing)
else:
ip4 = None
result = client.create_floatingip(
floating_network_id=external_network_id,
port_id=port_id,
tenant_id=tenant_id,
fixed_ip_address=ip4
)
floating_ip = result['floatingip']
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
self.floating_ips_client.delete_floatingip,
floating_ip['id'])
return floating_ip
def copy_file_to_host(self, file_from, dest, host, username, pkey):
dest = "%s@%s:%s" % (username, host, dest)
cmd = ("scp -v -o UserKnownHostsFile=/dev/null "
"-o StrictHostKeyChecking=no "
"-i %(pkey)s %(file1)s %(dest)s" % {'pkey': pkey,
'file1': file_from,
'dest': dest})
return self.execute(cmd)
def execute(self, cmd, cwd=None):
args = shlex.split(cmd)
subprocess_args = {'stdout': subprocess.PIPE,
'stderr': subprocess.STDOUT,
'cwd': cwd}
proc = subprocess.Popen(args, **subprocess_args)
stdout, stderr = proc.communicate()
if proc.returncode != 0:
LOG.error('Command %s returned with exit status %s, output %s, '
'error %s', cmd, proc.returncode, stdout, stderr)
return stdout
def _set_quotas(self, project_id=None, load_balancer=20, listener=20,
pool=20, health_monitor=20, member=20):
if not project_id:
project_id = self.networks_client.tenant_id
body = {'quota': {
'load_balancer': load_balancer, 'listener': listener,
'pool': pool, 'health_monitor': health_monitor, 'member': member}}
return self.quotas_client.update_quotas(project_id, **body)
def _create_load_balancer_tree(self, ip_version=4, cleanup=True):
# TODO(ptoohill): remove or null out project ID when Octavia supports
# keystone auth and automatically populates it for us.
project_id = self.networks_client.tenant_id
create_members = self._create_members_kwargs(self.subnet['id'])
create_pool = {'project_id': project_id,
'lb_algorithm': 'ROUND_ROBIN',
'protocol': 'HTTP',
'members': create_members}
create_listener = {'project_id': project_id,
'protocol': 'HTTP',
'protocol_port': 80,
'default_pool': create_pool}
create_lb = {'project_id': project_id,
'vip': {'subnet_id': self.subnet['id']},
'listeners': [create_listener]}
# Raise the quotas so the full load balancer tree can be created
self._set_quotas(project_id=project_id)
self.load_balancer = (self.load_balancers_client
.create_load_balancer_graph(create_lb))
load_balancer_id = self.load_balancer['id']
if cleanup:
self.addCleanup(self._cleanup_load_balancer, load_balancer_id)
LOG.info('Waiting for lb status on create load balancer id: %s',
load_balancer_id)
self.load_balancer = self._wait_for_load_balancer_status(
load_balancer_id)
self.vip_ip = self.load_balancer['vip'].get('ip_address')
# if the ipv4 is used for lb, then fetch the right values from
# tempest.conf file
if ip_version == 4:
if (config.network.public_network_id and
not config.network.project_networks_reachable):
load_balancer = self.load_balancer
self._assign_floating_ip_to_lb_vip(load_balancer)
self.vip_ip = self.floating_ips[
load_balancer['id']][0]['floating_ip_address']
# Currently the ovs-agent is not enforcing security groups on the
# vip port - see https://bugs.launchpad.net/neutron/+bug/1163569
# However the linuxbridge-agent does, and it is necessary to add a
# security group with a rule that allows tcp port 80 to the vip port.
self.ports_client_admin.update_port(
self.load_balancer['vip']['port_id'],
security_groups=[self.security_group['id']])
def _create_members_kwargs(self, subnet_id=None):
"""Create one or more Members
In case there is only one server, create both members with the same ip
but with different ports to listen on.
"""
create_member_kwargs = []
for server_id, ip in six.iteritems(self.server_fixed_ips):
create_member_kwargs.append({'ip_address': ip,
'protocol_port': 80,
'weight': 50,
'subnet_id': subnet_id})
return create_member_kwargs


@@ -1,34 +0,0 @@
# Copyright 2017 Rackspace, US Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest.common import utils
from tempest.lib import decorators
from octavia.tests.tempest.v2.scenario import base
class TestLoadBalancerQuota(base.BaseTestCase):
"""This tests attempts to exceed a set load balancer quota.
The following is the scenario outline:
1. Set the load balancer quota to one.
2. Create two load balancers, expecting the second create to fail
with a quota exceeded code.
"""
@utils.services('compute', 'network')
@decorators.skip_because(bug="1656110")
def test_load_balancer_quota(self):
self._set_quotas(project_id=None, load_balancer=1)
self._create_load_balancer_over_quota()


@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@@ -1,259 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_config import cfg
from oslo_config import fixture as oslo_fixture
import oslo_messaging as messaging
from octavia.api.handlers.queue import producer
from octavia.api.v1.types import health_monitor
from octavia.api.v1.types import l7policy
from octavia.api.v1.types import l7rule
from octavia.api.v1.types import listener
from octavia.api.v1.types import load_balancer
from octavia.api.v1.types import member
from octavia.api.v1.types import pool
from octavia.common import data_models
from octavia.tests.unit import base
class TestProducer(base.TestRpc):
def setUp(self):
super(TestProducer, self).setUp()
self.mck_model = mock.Mock()
self.mck_model.id = '10'
conf = self.useFixture(oslo_fixture.Config(cfg.CONF))
conf.config(group="oslo_messaging", topic='OCTAVIA_PROV')
mck_target = mock.patch(
'octavia.api.handlers.queue.producer.messaging.Target')
self.mck_client = mock.create_autospec(messaging.RPCClient)
mck_client = mock.patch(
'octavia.api.handlers.queue.producer.messaging.RPCClient',
return_value=self.mck_client)
mck_target.start()
mck_client.start()
self.addCleanup(mck_target.stop)
self.addCleanup(mck_client.stop)
def test_create_loadbalancer(self):
p = producer.LoadBalancerProducer()
p.create(self.mck_model)
kw = {'load_balancer_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'create_load_balancer', **kw)
def test_delete_loadbalancer(self):
p = producer.LoadBalancerProducer()
p.delete(self.mck_model, False)
kw = {'load_balancer_id': self.mck_model.id,
'cascade': False}
self.mck_client.cast.assert_called_once_with(
{}, 'delete_load_balancer', **kw)
def test_failover_loadbalancer(self):
p = producer.LoadBalancerProducer()
p.failover(self.mck_model)
kw = {'load_balancer_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'failover_load_balancer', **kw)
def test_failover_amphora(self):
p = producer.AmphoraProducer()
p.failover(self.mck_model)
kw = {'amphora_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'failover_amphora', **kw)
def test_update_loadbalancer(self):
p = producer.LoadBalancerProducer()
lb = data_models.LoadBalancer(id=10)
lb_updates = load_balancer.LoadBalancerPUT(enabled=False)
p.update(lb, lb_updates)
kw = {'load_balancer_id': lb.id,
'load_balancer_updates': lb_updates.to_dict(render_unsets=False)}
self.mck_client.cast.assert_called_once_with(
{}, 'update_load_balancer', **kw)
def test_create_listener(self):
p = producer.ListenerProducer()
p.create(self.mck_model)
kw = {'listener_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'create_listener', **kw)
def test_delete_listener(self):
p = producer.ListenerProducer()
p.delete(self.mck_model)
kw = {'listener_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'delete_listener', **kw)
def test_update_listener(self):
p = producer.ListenerProducer()
listener_model = data_models.LoadBalancer(id=10)
listener_updates = listener.ListenerPUT(enabled=False)
p.update(listener_model, listener_updates)
kw = {'listener_id': listener_model.id,
'listener_updates': listener_updates.to_dict(
render_unsets=False)}
self.mck_client.cast.assert_called_once_with(
{}, 'update_listener', **kw)
def test_create_pool(self):
p = producer.PoolProducer()
p.create(self.mck_model)
kw = {'pool_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'create_pool', **kw)
def test_delete_pool(self):
p = producer.PoolProducer()
p.delete(self.mck_model)
kw = {'pool_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'delete_pool', **kw)
def test_update_pool(self):
p = producer.PoolProducer()
pool_model = data_models.Pool(id=10)
pool_updates = pool.PoolPUT(enabled=False)
p.update(pool_model, pool_updates)
kw = {'pool_id': pool_model.id,
'pool_updates': pool_updates.to_dict(render_unsets=False)}
self.mck_client.cast.assert_called_once_with(
{}, 'update_pool', **kw)
def test_create_healthmonitor(self):
p = producer.HealthMonitorProducer()
p.create(self.mck_model)
kw = {'health_monitor_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'create_health_monitor', **kw)
def test_delete_healthmonitor(self):
p = producer.HealthMonitorProducer()
p.delete(self.mck_model)
kw = {'health_monitor_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'delete_health_monitor', **kw)
def test_update_healthmonitor(self):
p = producer.HealthMonitorProducer()
hm = data_models.HealthMonitor(id=20, pool_id=10)
hm_updates = health_monitor.HealthMonitorPUT(enabled=False)
p.update(hm, hm_updates)
kw = {'health_monitor_id': hm.id,
'health_monitor_updates': hm_updates.to_dict(
render_unsets=False)}
self.mck_client.cast.assert_called_once_with(
{}, 'update_health_monitor', **kw)
def test_create_member(self):
p = producer.MemberProducer()
p.create(self.mck_model)
kw = {'member_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'create_member', **kw)
def test_delete_member(self):
p = producer.MemberProducer()
p.delete(self.mck_model)
kw = {'member_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'delete_member', **kw)
def test_update_member(self):
p = producer.MemberProducer()
member_model = data_models.Member(id=10)
member_updates = member.MemberPUT(enabled=False)
p.update(member_model, member_updates)
kw = {'member_id': member_model.id,
'member_updates': member_updates.to_dict(render_unsets=False)}
self.mck_client.cast.assert_called_once_with(
{}, 'update_member', **kw)
def test_batch_update_members(self):
p = producer.MemberProducer()
member_model = data_models.Member(id=10)
p.batch_update(old_ids=[9],
new_ids=[11],
updated_models=[member_model])
kw = {'old_member_ids': [9],
'new_member_ids': [11],
'updated_members': [member_model.to_dict()]}
self.mck_client.cast.assert_called_once_with(
{}, 'batch_update_members', **kw)
def test_create_l7policy(self):
p = producer.L7PolicyProducer()
p.create(self.mck_model)
kw = {'l7policy_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'create_l7policy', **kw)
def test_delete_l7policy(self):
p = producer.L7PolicyProducer()
p.delete(self.mck_model)
kw = {'l7policy_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'delete_l7policy', **kw)
def test_update_l7policy(self):
p = producer.L7PolicyProducer()
l7policy_model = data_models.L7Policy(id=10)
l7policy_updates = l7policy.L7PolicyPUT(enabled=False)
p.update(l7policy_model, l7policy_updates)
kw = {'l7policy_id': l7policy_model.id,
'l7policy_updates': l7policy_updates.to_dict(
render_unsets=False)}
self.mck_client.cast.assert_called_once_with(
{}, 'update_l7policy', **kw)
def test_create_l7rule(self):
p = producer.L7RuleProducer()
p.create(self.mck_model)
kw = {'l7rule_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'create_l7rule', **kw)
def test_delete_l7rule(self):
p = producer.L7RuleProducer()
p.delete(self.mck_model)
kw = {'l7rule_id': self.mck_model.id}
self.mck_client.cast.assert_called_once_with(
{}, 'delete_l7rule', **kw)
def test_update_l7rule(self):
p = producer.L7RuleProducer()
l7rule_model = data_models.L7Rule(id=10)
l7rule_updates = l7rule.L7RulePUT(enabled=False)
p.update(l7rule_model, l7rule_updates)
kw = {'l7rule_id': l7rule_model.id,
'l7rule_updates': l7rule_updates.to_dict(render_unsets=False)}
self.mck_client.cast.assert_called_once_with(
{}, 'update_l7rule', **kw)


@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@@ -1,11 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@@ -1,139 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import exc
from wsme.rest import json as wsme_json
from wsme import types as wsme_types
from octavia.api.v1.types import health_monitor as hm_type
from octavia.common import constants
from octavia.tests.unit.api.common import base
class TestHealthMonitor(object):
_type = None
def test_invalid_type(self):
body = {"type": "http", "delay": 1, "timeout": 1, "fall_threshold": 1}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_delay(self):
body = {"type": constants.HEALTH_MONITOR_HTTP, "delay": "one",
"timeout": 1, "fall_threshold": 1}
self.assertRaises(ValueError, wsme_json.fromjson, self._type, body)
def test_invalid_timeout(self):
body = {"type": constants.HEALTH_MONITOR_HTTP, "delay": 1,
"timeout": "one", "fall_threshold": 1}
self.assertRaises(ValueError, wsme_json.fromjson, self._type, body)
def test_invalid_fall_threshold(self):
body = {"type": constants.HEALTH_MONITOR_HTTP, "delay": 1,
"timeout": 1, "fall_threshold": "one"}
self.assertRaises(ValueError, wsme_json.fromjson, self._type, body)
def test_invalid_rise_threshold(self):
body = {"type": constants.HEALTH_MONITOR_HTTP, "delay": 1,
"timeout": 1, "fall_threshold": 1, "rise_threshold": "one"}
self.assertRaises(ValueError, wsme_json.fromjson, self._type, body)
def test_invalid_http_method(self):
body = {"type": constants.HEALTH_MONITOR_HTTP, "delay": 1,
"timeout": 1, "fall_threshold": 1, "http_method": 1}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_url_path(self):
body = {"type": constants.HEALTH_MONITOR_HTTP, "delay": 1,
"timeout": 1, "fall_threshold": 1, "url_path": 1}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_expected_codes(self):
body = {"type": constants.HEALTH_MONITOR_HTTP, "delay": 1,
"timeout": 1, "fall_threshold": 1, "expected_codes": 1}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
class TestHealthMonitorPOST(base.BaseTypesTest, TestHealthMonitor):
_type = hm_type.HealthMonitorPOST
def test_health_monitor(self):
body = {"type": constants.HEALTH_MONITOR_HTTP, "delay": 1,
"timeout": 1, "fall_threshold": 1, "rise_threshold": 1}
hm = wsme_json.fromjson(self._type, body)
self.assertTrue(hm.enabled)
def test_type_mandatory(self):
body = {"delay": 80, "timeout": 1, "fall_threshold": 1,
"rise_threshold": 1}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_delay_mandatory(self):
body = {"type": constants.HEALTH_MONITOR_HTTP, "timeout": 1,
"fall_threshold": 1, "rise_threshold": 1}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_timeout_mandatory(self):
body = {"type": constants.HEALTH_MONITOR_HTTP, "delay": 1,
"fall_threshold": 1, "rise_threshold": 1}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_fall_threshold_mandatory(self):
body = {"type": constants.HEALTH_MONITOR_HTTP, "delay": 1,
"timeout": 1, "rise_threshold": 1}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_rise_threshold_mandatory(self):
body = {"type": constants.HEALTH_MONITOR_HTTP, "delay": 1,
"timeout": 1, "fall_threshold": 1}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_default_health_monitor_values(self):
# http_method = 'GET'
# url_path = '/'
# expected_codes = '200'
# The above are not required but should have the above example defaults
body = {"type": constants.HEALTH_MONITOR_HTTP, "delay": 1,
"timeout": 1, "fall_threshold": 1, "rise_threshold": 1}
hmpost = wsme_json.fromjson(self._type, body)
self.assertEqual('GET', hmpost.http_method)
self.assertEqual('/', hmpost.url_path)
self.assertEqual('200', hmpost.expected_codes)
def test_non_uuid_project_id(self):
body = {"type": constants.HEALTH_MONITOR_HTTP, "delay": 1,
"timeout": 1, "fall_threshold": 1, "rise_threshold": 1,
"project_id": "non-uuid"}
hm = wsme_json.fromjson(self._type, body)
self.assertEqual(hm.project_id, body['project_id'])
class TestHealthMonitorPUT(base.BaseTypesTest, TestHealthMonitor):
_type = hm_type.HealthMonitorPUT
def test_health_monitor(self):
body = {"http_method": constants.PROTOCOL_HTTPS}
hm = wsme_json.fromjson(self._type, body)
self.assertEqual(wsme_types.Unset, hm.enabled)


@@ -1,93 +0,0 @@
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import exc
from wsme.rest import json as wsme_json
from wsme import types as wsme_types
from octavia.api.v1.types import l7policy as l7policy_type
from octavia.common import constants
from octavia.tests.unit.api.common import base
class TestL7PolicyPOST(base.BaseTypesTest):
_type = l7policy_type.L7PolicyPOST
def test_l7policy(self):
body = {"action": constants.L7POLICY_ACTION_REJECT}
l7policy = wsme_json.fromjson(self._type, body)
self.assertEqual(constants.MAX_POLICY_POSITION, l7policy.position)
self.assertEqual(wsme_types.Unset, l7policy.redirect_url)
self.assertEqual(wsme_types.Unset, l7policy.redirect_pool_id)
self.assertTrue(l7policy.enabled)
def test_action_mandatory(self):
body = {}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_action(self):
body = {"action": "test"}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_with_redirect_url(self):
url = "http://www.example.com/"
body = {"action": constants.L7POLICY_ACTION_REDIRECT_TO_URL,
"redirect_url": url}
l7policy = wsme_json.fromjson(self._type, body)
self.assertEqual(constants.MAX_POLICY_POSITION, l7policy.position)
self.assertEqual(url, l7policy.redirect_url)
self.assertEqual(wsme_types.Unset, l7policy.redirect_pool_id)
def test_invalid_position(self):
body = {"action": constants.L7POLICY_ACTION_REJECT,
"position": "notvalid"}
self.assertRaises(ValueError, wsme_json.fromjson, self._type,
body)
def test_invalid_enabled(self):
body = {"action": constants.L7POLICY_ACTION_REJECT,
"enabled": "notvalid"}
self.assertRaises(ValueError, wsme_json.fromjson, self._type,
body)
def test_invalid_url(self):
body = {"action": constants.L7POLICY_ACTION_REDIRECT_TO_URL,
"redirect_url": "notvalid"}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
class TestL7PolicyPUT(base.BaseTypesTest):
_type = l7policy_type.L7PolicyPUT
def test_l7policy(self):
body = {"action": constants.L7POLICY_ACTION_REJECT,
"position": 0}
l7policy = wsme_json.fromjson(self._type, body)
self.assertEqual(0, l7policy.position)
self.assertEqual(wsme_types.Unset, l7policy.redirect_url)
self.assertEqual(wsme_types.Unset, l7policy.redirect_pool_id)
def test_invalid_position(self):
body = {"position": "test"}
self.assertRaises(ValueError, wsme_json.fromjson, self._type, body)
def test_invalid_action(self):
body = {"action": "test"}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)


@@ -1,109 +0,0 @@
# Copyright 2016 Blue Box, an IBM Company
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import exc
from wsme.rest import json as wsme_json
from wsme import types as wsme_types
from octavia.api.v1.types import l7rule as l7rule_type
from octavia.common import constants
from octavia.tests.unit.api.common import base
class TestL7RulePOST(base.BaseTypesTest):
_type = l7rule_type.L7RulePOST
def test_l7rule(self):
body = {"type": constants.L7RULE_TYPE_PATH,
"compare_type": constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
"value": "/api"}
l7rule = wsme_json.fromjson(self._type, body)
self.assertEqual(wsme_types.Unset, l7rule.key)
self.assertFalse(l7rule.invert)
def test_type_mandatory(self):
body = {"compare_type": constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
"value": "/api"}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_compare_type_mandatory(self):
body = {"type": constants.L7RULE_TYPE_PATH,
"value": "/api"}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_value_mandatory(self):
body = {"type": constants.L7RULE_TYPE_PATH,
"compare_type": constants.L7RULE_COMPARE_TYPE_STARTS_WITH}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_type(self):
body = {"type": "notvalid",
"compare_type": constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
"value": "/api"}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_compare_type(self):
body = {"type": constants.L7RULE_TYPE_PATH,
"compare_type": "notvalid",
"value": "/api"}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_invert(self):
body = {"type": constants.L7RULE_TYPE_PATH,
"compare_type": constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
"value": "/api",
"invert": "notvalid"}
self.assertRaises(ValueError, wsme_json.fromjson, self._type,
body)
class TestL7RulePUT(base.BaseTypesTest):
_type = l7rule_type.L7RulePUT
def test_l7rule(self):
body = {"type": constants.L7RULE_TYPE_PATH,
"compare_type": constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
"value": "/api"}
l7rule = wsme_json.fromjson(self._type, body)
self.assertEqual(wsme_types.Unset, l7rule.key)
self.assertFalse(l7rule.invert)
def test_invalid_type(self):
body = {"type": "notvalid",
"compare_type": constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
"value": "/api"}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_compare_type(self):
body = {"type": constants.L7RULE_TYPE_PATH,
"compare_type": "notvalid",
"value": "/api"}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_invert(self):
body = {"type": constants.L7RULE_TYPE_PATH,
"compare_type": constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
"value": "/api",
"invert": "notvalid"}
self.assertRaises(ValueError, wsme_json.fromjson, self._type,
body)


@@ -1,101 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import uuidutils
from wsme import exc
from wsme.rest import json as wsme_json
from wsme import types as wsme_types
from octavia.api.v1.types import listener as lis_type
from octavia.common import constants
from octavia.tests.unit.api.common import base
class TestListener(object):
_type = None
def test_invalid_name(self):
body = {"protocol": constants.PROTOCOL_HTTP, "protocol_port": 80,
"name": 0}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_description(self):
body = {"protocol": constants.PROTOCOL_HTTP, "protocol_port": 80,
"description": 0}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_enabled(self):
body = {"protocol": constants.PROTOCOL_HTTP, "protocol_port": 80,
"enabled": "notvalid"}
self.assertRaises(ValueError, wsme_json.fromjson, self._type,
body)
def test_invalid_protocol(self):
body = {"protocol": "http", "protocol_port": 80}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_protocol_port(self):
body = {"protocol": constants.PROTOCOL_HTTP, "protocol_port": "test"}
self.assertRaises(ValueError, wsme_json.fromjson, self._type, body)
def test_invalid_connection_limit(self):
body = {"protocol": constants.PROTOCOL_HTTP, "protocol_port": 80,
"connection_limit": "test"}
self.assertRaises(ValueError, wsme_json.fromjson, self._type, body)
class TestListenerPOST(base.BaseTypesTest, TestListener):
_type = lis_type.ListenerPOST
def test_listener(self):
body = {"name": "test", "description": "test", "connection_limit": 10,
"protocol": constants.PROTOCOL_HTTP, "protocol_port": 80,
"default_pool_id": uuidutils.generate_uuid()}
listener = wsme_json.fromjson(self._type, body)
self.assertTrue(listener.enabled)
def test_protocol_mandatory(self):
body = {"protocol_port": 80}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_protocol_port_mandatory(self):
body = {"protocol": constants.PROTOCOL_HTTP}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_non_uuid_project_id(self):
body = {"name": "test", "description": "test", "connection_limit": 10,
"protocol": constants.PROTOCOL_HTTP, "protocol_port": 80,
"default_pool_id": uuidutils.generate_uuid(),
"project_id": "non-uuid"}
listener = wsme_json.fromjson(self._type, body)
self.assertEqual(listener.project_id, body['project_id'])
class TestListenerPUT(base.BaseTypesTest, TestListener):
_type = lis_type.ListenerPUT
def test_listener(self):
body = {"name": "test", "description": "test", "connection_limit": 10,
"protocol": constants.PROTOCOL_HTTP, "protocol_port": 80,
"default_pool_id": uuidutils.generate_uuid()}
listener = wsme_json.fromjson(self._type, body)
self.assertEqual(wsme_types.Unset, listener.enabled)


@@ -1,108 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import uuidutils
from wsme import exc
from wsme.rest import json as wsme_json
from wsme import types as wsme_types
from octavia.api.v1.types import load_balancer as lb_type
from octavia.tests.unit.api.common import base
class TestLoadBalancer(object):
_type = None
def test_load_balancer(self):
body = {"name": "test_name", "description": "test_description",
"vip": {}}
lb = wsme_json.fromjson(self._type, body)
self.assertTrue(lb.enabled)
def test_invalid_name(self):
body = {"name": 0}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_name_length(self):
body = {"name": "x" * 256}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_description(self):
body = {"description": 0}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_description_length(self):
body = {"name": "x" * 256}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_enabled(self):
body = {"enabled": "notvalid"}
self.assertRaises(ValueError, wsme_json.fromjson, self._type,
body)
class TestLoadBalancerPOST(base.BaseTypesTest, TestLoadBalancer):
_type = lb_type.LoadBalancerPOST
def test_vip_mandatory(self):
body = {"name": "test"}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_non_uuid_project_id(self):
body = {"name": "test_name", "description": "test_description",
"vip": {}, "project_id": "non-uuid"}
lb = wsme_json.fromjson(self._type, body)
self.assertEqual(lb.project_id, body['project_id'])
class TestLoadBalancerPUT(base.BaseTypesTest, TestLoadBalancer):
_type = lb_type.LoadBalancerPUT
def test_load_balancer(self):
body = {"name": "test_name", "description": "test_description"}
lb = wsme_json.fromjson(self._type, body)
self.assertEqual(wsme_types.Unset, lb.enabled)
class TestVip(base.BaseTypesTest):
_type = lb_type.VIP
def test_vip(self):
body = {"ip_address": "10.0.0.1",
"port_id": uuidutils.generate_uuid()}
wsme_json.fromjson(self._type, body)
def test_invalid_ip_address(self):
body = {"ip_address": uuidutils.generate_uuid()}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_port_id(self):
body = {"port_id": "invalid_uuid"}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)
def test_invalid_subnet_id(self):
body = {"subnet_id": "invalid_uuid"}
self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
body)


@@ -1,93 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import exc
from wsme.rest import json as wsme_json
from wsme import types as wsme_types

from octavia.api.v1.types import member as member_type
from octavia.tests.unit.api.common import base


class TestMemberPOST(base.BaseTypesTest):

    _type = member_type.MemberPOST

    def test_member(self):
        body = {"ip_address": "10.0.0.1", "protocol_port": 80}
        member = wsme_json.fromjson(self._type, body)
        self.assertTrue(member.enabled)
        self.assertEqual(1, member.weight)
        self.assertEqual(wsme_types.Unset, member.subnet_id)

    def test_address_mandatory(self):
        body = {}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)

    def test_protocol_mandatory(self):
        body = {}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)

    def test_invalid_address(self):
        body = {"ip_address": "test", "protocol_port": 443}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)

    def test_invalid_subnet_id(self):
        body = {"ip_address": "10.0.0.1", "protocol_port": 443,
                "subnet_id": "invalid_uuid"}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)

    def test_invalid_enabled(self):
        body = {"ip_address": "10.0.0.1", "protocol_port": 443,
                "enabled": "notvalid"}
        self.assertRaises(ValueError, wsme_json.fromjson, self._type,
                          body)

    def test_invalid_protocol_port(self):
        body = {"ip_address": "10.0.0.1", "protocol_port": "test"}
        self.assertRaises(ValueError, wsme_json.fromjson, self._type, body)

    def test_invalid_weight(self):
        body = {"ip_address": "10.0.0.1", "protocol_port": 443,
                "weight": "test"}
        self.assertRaises(ValueError, wsme_json.fromjson, self._type, body)

    def test_non_uuid_project_id(self):
        body = {"ip_address": "10.0.0.1", "protocol_port": 80,
                "project_id": "non-uuid"}
        member = wsme_json.fromjson(self._type, body)
        self.assertEqual(member.project_id, body['project_id'])


class TestMemberPUT(base.BaseTypesTest):

    _type = member_type.MemberPUT

    def test_member(self):
        body = {"protocol_port": 80}
        member = wsme_json.fromjson(self._type, body)
        self.assertEqual(wsme_types.Unset, member.weight)
        self.assertEqual(wsme_types.Unset, member.enabled)

    def test_invalid_protocol_port(self):
        body = {"protocol_port": "test"}
        self.assertRaises(ValueError, wsme_json.fromjson, self._type, body)

    def test_invalid_weight(self):
        body = {"weight": "test"}
        self.assertRaises(ValueError, wsme_json.fromjson, self._type, body)
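The removed `TestMemberPOST.test_member` asserts the defaults the v1 member type supplied: `weight` of 1 and `enabled` of `True`, plus rejection of a non-address `ip_address` and a non-integer `protocol_port`. A hedged, stdlib-only sketch of that behavior (a hypothetical `dataclass` stand-in; the real type was a wsme class, and field handling there differs):

```python
import dataclasses
import ipaddress


@dataclasses.dataclass
class Member:
    ip_address: str
    protocol_port: int
    weight: int = 1       # default asserted by test_member above
    enabled: bool = True  # default asserted by test_member above

    def __post_init__(self):
        ipaddress.ip_address(self.ip_address)  # "test" -> ValueError
        if not isinstance(self.protocol_port, int):
            raise ValueError("protocol_port must be an int")
        if not isinstance(self.weight, int):
            raise ValueError("weight must be an int")


m = Member(ip_address="10.0.0.1", protocol_port=80)
assert m.weight == 1 and m.enabled is True
```

Unlike wsme, this sketch has no notion of `Unset`, so it cannot distinguish "omitted" from "default" the way `TestMemberPUT` does.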

View File

@ -1,134 +0,0 @@
# Copyright 2014 Rackspace
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from wsme import exc
from wsme.rest import json as wsme_json
from wsme import types as wsme_types

from octavia.api.v1.types import pool as pool_type
from octavia.common import constants
from octavia.tests.unit.api.common import base


class TestSessionPersistence(object):

    _type = None

    def test_session_persistence(self):
        body = {"type": constants.SESSION_PERSISTENCE_HTTP_COOKIE}
        sp = wsme_json.fromjson(self._type, body)
        self.assertIsNotNone(sp.type)

    def test_invalid_type(self):
        body = {"type": "source_ip"}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)

    def test_invalid_cookie_name(self):
        body = {"type": constants.SESSION_PERSISTENCE_HTTP_COOKIE,
                "cookie_name": 10}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)


class TestPoolPOST(base.BaseTypesTest):

    _type = pool_type.PoolPOST

    def test_pool(self):
        body = {"protocol": constants.PROTOCOL_HTTP,
                "lb_algorithm": constants.LB_ALGORITHM_ROUND_ROBIN}
        pool = wsme_json.fromjson(self._type, body)
        self.assertTrue(pool.enabled)

    def test_protocol_mandatory(self):
        body = {"lb_algorithm": constants.LB_ALGORITHM_ROUND_ROBIN}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)

    def test_lb_algorithm_mandatory(self):
        body = {"protocol": constants.PROTOCOL_HTTP}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)

    def test_invalid_name(self):
        body = {"name": 10, "protocol": constants.PROTOCOL_HTTP,
                "lb_algorithm": constants.LB_ALGORITHM_ROUND_ROBIN}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)

    def test_invalid_description(self):
        body = {"description": 10, "protocol": constants.PROTOCOL_HTTP,
                "lb_algorithm": constants.LB_ALGORITHM_ROUND_ROBIN}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)

    def test_invalid_protocol(self):
        body = {"protocol": "http",
                "lb_algorithm": constants.LB_ALGORITHM_ROUND_ROBIN}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)

    def test_invalid_lb_algorithm(self):
        body = {"protocol": constants.PROTOCOL_HTTP,
                "lb_algorithm": "source_ip"}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)

    def test_non_uuid_project_id(self):
        body = {"protocol": constants.PROTOCOL_HTTP,
                "lb_algorithm": constants.LB_ALGORITHM_ROUND_ROBIN,
                "project_id": "non-uuid"}
        pool = wsme_json.fromjson(self._type, body)
        self.assertEqual(pool.project_id, body['project_id'])


class TestPoolPUT(base.BaseTypesTest):

    _type = pool_type.PoolPUT

    def test_pool(self):
        body = {"name": "test_name"}
        pool = wsme_json.fromjson(self._type, body)
        self.assertEqual(wsme_types.Unset, pool.enabled)

    def test_invalid_name(self):
        body = {"name": 10}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)

    def test_invalid_description(self):
        body = {"description": 10}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)

    def test_invalid_lb_algorithm(self):
        body = {"lb_algorithm": "source_ip"}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)


class TestSessionPersistencePOST(base.BaseTypesTest, TestSessionPersistence):

    _type = pool_type.SessionPersistencePOST

    def test_type_mandatory(self):
        body = {"cookie_name": "test_name"}
        self.assertRaises(exc.InvalidInput, wsme_json.fromjson, self._type,
                          body)


class TestSessionPersistencePUT(base.BaseTypesTest, TestSessionPersistence):

    _type = pool_type.SessionPersistencePUT
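In the deleted `test_invalid_type` above, the lowercase `"source_ip"` is rejected even though a source-IP persistence type exists, because the comparison against the supported constants is case-sensitive. A minimal sketch of that enum-style membership check (the constant values here are assumed to mirror `octavia.common.constants`, which the deleted tests imported; the real validation happens inside wsme and raises `exc.InvalidInput` rather than `ValueError`):

```python
# Assumed values mirroring octavia.common.constants.
SESSION_PERSISTENCE_SOURCE_IP = 'SOURCE_IP'
SESSION_PERSISTENCE_HTTP_COOKIE = 'HTTP_COOKIE'
SUPPORTED_SP_TYPES = (SESSION_PERSISTENCE_SOURCE_IP,
                      SESSION_PERSISTENCE_HTTP_COOKIE)


def validate_sp_type(value):
    # Case-sensitive membership check, which is why the lowercase
    # "source_ip" in test_invalid_type was rejected.
    if value not in SUPPORTED_SP_TYPES:
        raise ValueError("%r is not a supported persistence type" % value)
    return value


assert validate_sp_type('HTTP_COOKIE') == 'HTTP_COOKIE'
```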

View File

@ -17,7 +17,6 @@ from oslo_config import cfg
 from oslo_config import fixture as oslo_fixture
 from oslo_utils import uuidutils
-from octavia.api.v1.types import load_balancer as lb_types
 import octavia.common.constants as constants
 import octavia.common.exceptions as exceptions
 import octavia.common.validate as validate
@ -299,53 +298,50 @@ class TestValidations(base.TestCase):
             self.assertEqual(validate.subnet_exists(subnet_id), subnet)
 
     def test_network_exists_with_bad_network(self):
-        vip = lb_types.VIP()
-        vip.network_id = uuidutils.generate_uuid()
+        network_id = uuidutils.generate_uuid()
         with mock.patch(
                 'octavia.common.utils.get_network_driver') as net_mock:
             net_mock.return_value.get_network = mock.Mock(
                 side_effect=network_base.NetworkNotFound('Network not found'))
             self.assertRaises(
                 exceptions.InvalidSubresource,
-                validate.network_exists_optionally_contains_subnet, vip)
+                validate.network_exists_optionally_contains_subnet, network_id)
 
     def test_network_exists_with_valid_network(self):
-        vip = lb_types.VIP()
-        vip.network_id = uuidutils.generate_uuid()
-        network = network_models.Network(id=vip.network_id)
+        network_id = uuidutils.generate_uuid()
+        network = network_models.Network(id=network_id)
         with mock.patch(
                 'octavia.common.utils.get_network_driver') as net_mock:
             net_mock.return_value.get_network.return_value = network
             self.assertEqual(
-                validate.network_exists_optionally_contains_subnet(vip),
+                validate.network_exists_optionally_contains_subnet(network_id),
                 network)
 
     def test_network_exists_with_valid_subnet(self):
-        vip = lb_types.VIP()
-        vip.network_id = uuidutils.generate_uuid()
-        vip.subnet_id = uuidutils.generate_uuid()
+        network_id = uuidutils.generate_uuid()
+        subnet_id = uuidutils.generate_uuid()
         network = network_models.Network(
-            id=vip.network_id,
-            subnets=[vip.subnet_id])
+            id=network_id,
+            subnets=[subnet_id])
         with mock.patch(
                 'octavia.common.utils.get_network_driver') as net_mock:
             net_mock.return_value.get_network.return_value = network
             self.assertEqual(
-                validate.network_exists_optionally_contains_subnet(vip),
+                validate.network_exists_optionally_contains_subnet(
+                    network_id, subnet_id),
                 network)
 
     def test_network_exists_with_bad_subnet(self):
-        vip = lb_types.VIP()
-        vip.network_id = uuidutils.generate_uuid()
-        vip.subnet_id = uuidutils.generate_uuid()
-        network = network_models.Network(id=vip.network_id)
+        network_id = uuidutils.generate_uuid()
+        subnet_id = uuidutils.generate_uuid()
+        network = network_models.Network(id=network_id)
         with mock.patch(
                 'octavia.common.utils.get_network_driver') as net_mock:
             net_mock.return_value.get_network.return_value = network
             self.assertRaises(
                 exceptions.InvalidSubresource,
                 validate.network_exists_optionally_contains_subnet,
-                vip.network_id, vip.subnet_id)
+                network_id, subnet_id)
 
     def test_network_allowed_by_config(self):
         net_id1 = uuidutils.generate_uuid()
def test_network_allowed_by_config(self):
net_id1 = uuidutils.generate_uuid()

Some files were not shown because too many files have changed in this diff