Add Pool Manager to handle subports

This patch introduces a ports pool manager that runs as an HTTP server
thread belonging to the VIF handler. This manager handles requests to
populate and/or empty given subport pools so that they can be easily
managed.

It also includes a client and documentation to test/use the new functionality.

Implements: blueprint kuryr-manager-cli-tool
Change-Id: I495c0ca3ed997ab9da1763d8a3e60bbf7ac618b9
Luis Tomas Bolivar 2017-08-29 08:43:34 +02:00
parent bc8be6506d
commit 38697ddbeb
15 changed files with 791 additions and 0 deletions


@@ -0,0 +1,258 @@
Subport pools management tool
=============================
This tool makes it easier to deal with subport pools. It allows populating
a given number of subports at the specified pools (i.e., at the VM trunks), as
well as freeing the unused ones.
The first step is to enable the pool manager by adding this to
`/etc/kuryr/kuryr.conf`::
[kubernetes]
enable_manager = True
If the environment has been deployed with devstack, the socket file directory
will have been created automatically. However, if that is not the case, you
need to create the directory for the socket file with the right permissions.
If no other path is specified, the default location for the socket file is:
`/run/kuryr/kuryr_manage.sock`
Hence, you need to create that directory and grant read/write access to the
user running kuryr-kubernetes.service, for instance::
$ sudo mkdir -p /run/kuryr
$ sudo chown stack:stack /run/kuryr
Finally, restart kuryr-k8s-controller::
$ sudo systemctl restart devstack@kuryr-kubernetes.service
Populate subport pools for nested environment
---------------------------------------------
Once the nested environment is up and running, and the pool manager has been
started, we can populate the pools, i.e., the trunk ports in use by the
overcloud VMs, with subports. From the `undercloud` we just need to make use
of the subports.py tool.
To obtain information about the tool options::
$ python contrib/pools-management/subports.py -h
usage: subports.py [-h] {create,free} ...
Tool to create/free subports from the subport pools
positional arguments:
{create,free} commands
create Populate the pool(s) with subports
free Remove unused subports from the pools
optional arguments:
-h, --help show this help message and exit
And to obtain information about the create subcommand::
$ python contrib/pools-management/subports.py create -h
usage: subports.py create [-h] --trunks SUBPORTS [SUBPORTS ...] [-n NUM] [-t TIMEOUT]
optional arguments:
-h, --help show this help message and exit
--trunks SUBPORTS [SUBPORTS ...]
list of trunk IPs where subports will be added
-n NUM, --num-ports NUM
number of subports to be created per pool
-t TIMEOUT, --timeout TIMEOUT
set timeout for operation. Default is 180 sec
Then, we can check the existing (overcloud) VMs and use their (trunk) IPs to
populate their respective pools::
$ openstack server list -f value -c Networks
net0=10.0.4.5
net0=10.0.4.6, 172.24.4.5
As can be seen, the second VM also has a floating IP associated, but we only
need the one belonging to `net0`. If we want to create and attach a subport
to the 10.0.4.5 trunk, and to the respective pool, we just need to do::
$ python contrib/pools-management/subports.py create --trunks 10.0.4.5
As the number of ports to create is not specified, only 1 subport is created,
since that is the default value. We can check the result of this command
with::
# Checking the subport named `available-port` has been created
$ openstack port list | grep available-port
| 1de77073-7127-4c39-a47b-cef15f98849c | available-port | fa:16:3e:64:7d:90 | ip_address='10.0.0.70', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
# Checking the subport is attached to the VM trunk
$ openstack network trunk show trunk1
+-----------------+--------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| created_at | 2017-08-28T15:06:54Z |
| description | |
| id | 9048c109-c1aa-4a41-9508-71b2ba98f3b0 |
| name | trunk1 |
| port_id | 4180a2e5-e184-424a-93d4-54b48490f50d |
| project_id | a05f6ec0abd04cba80cd160f8baaac99 |
| revision_number | 43 |
| status | ACTIVE |
| sub_ports | port_id='1de77073-7127-4c39-a47b-cef15f98849c', segmentation_id='3934', segmentation_type='vlan' |
| tags | [] |
| tenant_id | a05f6ec0abd04cba80cd160f8baaac99 |
| updated_at | 2017-08-29T06:12:39Z |
+-----------------+--------------------------------------------------------------------------------------------------+
It can be seen that the port with id `1de77073-7127-4c39-a47b-cef15f98849c`
has been attached to `trunk1`.
Similarly, we can add subports to different pools by passing several IPs to
the `--trunks` option, and we can also modify the number of subports created
per pool with the `--num` option::
$ python contrib/pools-management/subports.py create --trunks 10.0.4.6 10.0.4.5 --num 3
This command will create 6 subports in total, 3 at trunk 10.0.4.5 and another
3 at trunk 10.0.4.6. As before, we can check the result of this command with::
$ openstack port list | grep available-port
| 1de77073-7127-4c39-a47b-cef15f98849c | available-port | fa:16:3e:64:7d:90 | ip_address='10.0.0.70', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| 52e52281-4692-45e9-935e-db77de44049a | available-port | fa:16:3e:0b:45:f6 | ip_address='10.0.0.73', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| 71245983-e15e-4ae8-9425-af255b54921b | available-port | fa:16:3e:e5:2f:90 | ip_address='10.0.0.68', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| b6a8aa34-feef-42d7-b7ce-f9c33ac499ca | available-port | fa:16:3e:0c:8c:b0 | ip_address='10.0.0.65', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| bee0cb3e-8d83-4942-8cdd-fc091b6e6058 | available-port | fa:16:3e:c2:0a:c6 | ip_address='10.0.0.74', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| c2d7b5c9-606d-4499-9981-0f94ec94f7e1 | available-port | fa:16:3e:73:89:d2 | ip_address='10.0.0.67', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| cb42940f-40c0-4e01-aa40-f3e9c5f6743f | available-port | fa:16:3e:49:73:ca | ip_address='10.0.0.66', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
$ openstack network trunk show trunk0
+-----------------+--------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| created_at | 2017-08-25T07:28:11Z |
| description | |
| id | c730ff56-69c2-4540-b3d4-d2978007236d |
| name | trunk0 |
| port_id | ad1b8e91-0698-473d-a2f2-d123e8a0af45 |
| project_id | a05f6ec0abd04cba80cd160f8baaac99 |
| revision_number | 381 |
| status | ACTIVE |
| sub_ports       | port_id='bee0cb3e-8d83-4942-8cdd-fc091b6e6058', segmentation_id='875', segmentation_type='vlan'  |
| | port_id='71245983-e15e-4ae8-9425-af255b54921b', segmentation_id='1446', segmentation_type='vlan' |
| | port_id='b6a8aa34-feef-42d7-b7ce-f9c33ac499ca', segmentation_id='1652', segmentation_type='vlan' |
| tags | [] |
| tenant_id | a05f6ec0abd04cba80cd160f8baaac99 |
| updated_at | 2017-08-29T06:19:24Z |
+-----------------+--------------------------------------------------------------------------------------------------+
$ openstack network trunk show trunk1
+-----------------+--------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| created_at | 2017-08-28T15:06:54Z |
| description | |
| id | 9048c109-c1aa-4a41-9508-71b2ba98f3b0 |
| name | trunk1 |
| port_id | 4180a2e5-e184-424a-93d4-54b48490f50d |
| project_id | a05f6ec0abd04cba80cd160f8baaac99 |
| revision_number | 46 |
| status | ACTIVE |
| sub_ports | port_id='c2d7b5c9-606d-4499-9981-0f94ec94f7e1', segmentation_id='289', segmentation_type='vlan' |
| | port_id='cb42940f-40c0-4e01-aa40-f3e9c5f6743f', segmentation_id='1924', segmentation_type='vlan' |
| | port_id='52e52281-4692-45e9-935e-db77de44049a', segmentation_id='3866', segmentation_type='vlan' |
| | port_id='1de77073-7127-4c39-a47b-cef15f98849c', segmentation_id='3934', segmentation_type='vlan' |
| tags | [] |
| tenant_id | a05f6ec0abd04cba80cd160f8baaac99 |
| updated_at | 2017-08-29T06:19:28Z |
+-----------------+--------------------------------------------------------------------------------------------------+
We can see that now we have 7 subports, 3 of them attached to `trunk0` and 4
(1 + 3) attached to `trunk1`.
After that, if we create new pods, we can see that the pre-created subports
are being used::
$ kubectl run demo --image=celebdor/kuryr-demo
$ kubectl scale deploy/demo --replicas=2
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-2293951457-0l35q 1/1 Running 0 8s
demo-2293951457-nlghf 1/1 Running 0 17s
$ openstack port list | grep demo
| 71245983-e15e-4ae8-9425-af255b54921b | demo-2293951457-0l35q | fa:16:3e:e5:2f:90 | ip_address='10.0.0.68', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| b6a8aa34-feef-42d7-b7ce-f9c33ac499ca | demo-2293951457-nlghf | fa:16:3e:0c:8c:b0 | ip_address='10.0.0.65', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
Free pools for nested environment
---------------------------------
In addition to the create subcommand, there is a `free` subcommand that
allows removing the available ports either at a given pool (i.e., VM trunk)
or at all of them::
$ python contrib/pools-management/subports.py free -h
usage: subports.py free [-h] [--trunks SUBPORTS [SUBPORTS ...]] [-t TIMEOUT]
optional arguments:
-h, --help show this help message and exit
--trunks SUBPORTS [SUBPORTS ...]
list of trunk IPs where subports will be freed
-t TIMEOUT, --timeout TIMEOUT
set timeout for operation. Default is 180 sec
Following from the previous example, we can remove the available ports
attached to a given pool, e.g.::
$ python contrib/pools-management/subports.py free --trunks 10.0.4.5
$ openstack network trunk show trunk1
+-----------------+--------------------------------------+
| Field | Value |
+-----------------+--------------------------------------+
| admin_state_up | UP |
| created_at | 2017-08-28T15:06:54Z |
| description | |
| id | 9048c109-c1aa-4a41-9508-71b2ba98f3b0 |
| name | trunk1 |
| port_id | 4180a2e5-e184-424a-93d4-54b48490f50d |
| project_id | a05f6ec0abd04cba80cd160f8baaac99 |
| revision_number | 94 |
| status | ACTIVE |
| sub_ports | |
| tags | [] |
| tenant_id | a05f6ec0abd04cba80cd160f8baaac99 |
| updated_at | 2017-08-29T06:40:18Z |
+-----------------+--------------------------------------+
Or from all the pools at once::
$ python contrib/pools-management/subports.py free
$ openstack port list | grep available-port
$ # returns nothing
Without the script
------------------
Note that the same can be done without the script, by calling the REST API
directly with curl::
# To populate the pool
$ curl --unix-socket /run/kuryr/kuryr_manage.sock http://localhost/populatePool -H "Content-Type: application/json" -X POST -d '{"trunks": ["10.0.4.6"], "num_ports": 3}'
# To free the pool
$ curl --unix-socket /run/kuryr/kuryr_manage.sock http://localhost/freePool -H "Content-Type: application/json" -X POST -d '{"trunks": ["10.0.4.6"]}'
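The same requests can also be issued from Python. The following is a minimal
sketch mirroring the bundled subports.py client; it assumes the default
socket path and the populate endpoint shown above::

    import socket
    from six.moves import http_client as httplib

    class ManagerConnection(httplib.HTTPConnection):
        """HTTPConnection that dials the manager's Unix domain socket."""

        def __init__(self, path):
            httplib.HTTPConnection.__init__(self, 'localhost')
            self._path = path

        def connect(self):
            # Connect to the socket file instead of a TCP endpoint.
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = ManagerConnection('/run/kuryr/kuryr_manage.sock')
    conn.request('POST', '/populatePool',
                 body='{"trunks": ["10.0.4.6"], "num_ports": 3}',
                 headers={'Content-Type': 'application/json',
                          'Connection': 'close'})
    print(conn.getresponse().read())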


@@ -0,0 +1,121 @@
# Copyright 2017 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import socket
from oslo_serialization import jsonutils
from six.moves import http_client as httplib
from kuryr_kubernetes import constants
class UnixDomainHttpConnection(httplib.HTTPConnection):
def __init__(self, path, timeout):
httplib.HTTPConnection.__init__(
self, "localhost", timeout=timeout)
self.__unix_socket_path = path
self.timeout = timeout
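    # Override connect() to dial the manager's Unix domain socket instead
    # of opening a TCP connection to "localhost".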
def connect(self):
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.settimeout(self.timeout)
sock.connect(self.__unix_socket_path)
self.sock = sock
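# POST to the manager's /populatePool endpoint over the Unix socket and
# print the response body.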
def create_subports(num_ports, trunk_ips, timeout=180):
method = 'POST'
body = jsonutils.dumps({"trunks": trunk_ips, "num_ports": num_ports})
headers = {'Content-Type': 'application/json', 'Connection': 'close'}
headers['Content-Length'] = len(body)
path = 'http://localhost{0}'.format(constants.VIF_POOL_POPULATE)
socket_path = constants.MANAGER_SOCKET_FILE
conn = UnixDomainHttpConnection(socket_path, timeout)
conn.request(method, path, body=body, headers=headers)
resp = conn.getresponse()
print(resp.read())
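# POST to the manager's /freePool endpoint; when trunk_ips is None the
# manager frees the ports at every pool.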
def delete_subports(trunk_ips, timeout=180):
method = 'POST'
body = jsonutils.dumps({"trunks": trunk_ips})
headers = {'Content-Type': 'application/json', 'Connection': 'close'}
headers['Content-Length'] = len(body)
path = 'http://localhost{0}'.format(constants.VIF_POOL_FREE)
socket_path = constants.MANAGER_SOCKET_FILE
conn = UnixDomainHttpConnection(socket_path, timeout)
conn.request(method, path, body=body, headers=headers)
resp = conn.getresponse()
print(resp.read())
def _get_parser():
parser = argparse.ArgumentParser(
description='Tool to create/free subports from the subports pool')
subparser = parser.add_subparsers(help='commands', dest='command')
create_ports_parser = subparser.add_parser(
'create',
help='Populate the pool(s) with subports')
create_ports_parser.add_argument(
'--trunks',
help='list of trunk IPs where subports will be added',
nargs='+',
dest='subports',
required=True)
create_ports_parser.add_argument(
'-n', '--num-ports',
help='number of subports to be created per pool.',
dest='num',
default=1,
type=int)
create_ports_parser.add_argument(
'-t', '--timeout',
help='set timeout for operation. Default is 180 sec',
dest='timeout',
default=180,
type=int)
delete_ports_parser = subparser.add_parser(
'free',
help='Remove unused subports from the pools')
delete_ports_parser.add_argument(
'--trunks',
help='list of trunk IPs where subports will be freed',
nargs='+',
dest='subports')
delete_ports_parser.add_argument(
'-t', '--timeout',
help='set timeout for operation. Default is 180 sec',
dest='timeout',
default=180,
type=int)
return parser
def main():
"""Parse options and call the appropriate class/method."""
parser = _get_parser()
args = parser.parse_args()
if args.command == 'create':
create_subports(args.num, args.subports, args.timeout)
elif args.command == 'free':
delete_subports(args.subports, args.timeout)
if __name__ == '__main__':
main()
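The helper functions above can also be called directly from Python. A
hypothetical usage sketch, assuming subports.py is on the import path::

    import subports

    # Populate the pool at trunk 10.0.4.5 with 3 subports, then free them.
    subports.create_subports(3, ['10.0.4.5'])
    subports.delete_subports(['10.0.4.5'])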


@@ -63,3 +63,11 @@ KURYR_VIF_POOL_DRIVER=nested
# KURYR_VIF_POOL_MAX=0
# KURYR_VIF_POOL_BATCH=10
# KURYR_VIF_POOL_UPDATE_FREQ=20
# Kuryr VIF Pool Manager
# ======================
#
# Uncomment the next line to enable the pool manager. Note it requires the
# nested-vlan pod vif driver, as well as the ports pool being enabled and
# configured with the nested driver
# KURYR_VIF_POOL_MANAGER=True


@@ -215,6 +215,14 @@ enable_service kuryr-kubernetes
# KURYR_VIF_POOL_BATCH=10
# KURYR_VIF_POOL_UPDATE_FREQ=20
# Kuryr VIF Pool Manager
# ======================
#
# Uncomment the next line to enable the pool manager. Note it requires the
# nested-vlan pod vif driver, as well as the ports pool being enabled and
# configured with the nested driver
# KURYR_VIF_POOL_MANAGER=True
# Increase Octavia amphorae timeout so that the first LB amphora has time to
# build and boot
if [[ "$KURYR_K8S_LBAAS_USE_OCTAVIA" == "True" ]]; then


@@ -41,6 +41,7 @@ EOF
}
function configure_kuryr {
local dir
sudo install -d -o "$STACK_USER" "$KURYR_CONFIG_DIR"
"${KURYR_HOME}/tools/generate_config_file_samples.sh"
sudo install -o "$STACK_USER" -m 640 -D "${KURYR_HOME}/etc/kuryr.conf.sample" \
@@ -73,6 +74,16 @@ function configure_kuryr {
iniset "$KURYR_CONFIG" vif_pool ports_pool_max "$KURYR_VIF_POOL_MAX"
iniset "$KURYR_CONFIG" vif_pool ports_pool_batch "$KURYR_VIF_POOL_BATCH"
iniset "$KURYR_CONFIG" vif_pool ports_pool_update_frequency "$KURYR_VIF_POOL_UPDATE_FREQ"
if [ "$KURYR_VIF_POOL_MANAGER" ]; then
iniset "$KURYR_CONFIG" kubernetes enable_manager "$KURYR_VIF_POOL_MANAGER"
dir=`iniget "$KURYR_CONFIG" pool_manager sock_file`
if [[ -z $dir ]]; then
dir="/run/kuryr/kuryr_manage.sock"
fi
dir=`dirname $dir`
sudo mkdir -p $dir
fi
fi
fi
}


@@ -62,3 +62,6 @@ KURYR_VIF_POOL_MIN=${KURYR_VIF_POOL_MIN:-5}
KURYR_VIF_POOL_MAX=${KURYR_VIF_POOL_MAX:-0}
KURYR_VIF_POOL_BATCH=${KURYR_VIF_POOL_BATCH:-10}
KURYR_VIF_POOL_UPDATE_FREQ=${KURYR_VIF_POOL_UPDATE_FREQ:-20}
# Kuryr VIF Pool Manager
KURYR_VIF_POOL_MANAGER=${KURYR_VIF_POOL_MANAGER:-False}


@@ -43,6 +43,11 @@ running. 4GB memory and 2 vCPUs, is the minimum resource requirement for the VM:
.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pools.html
- [OPTIONAL] If you want to enable the subport pools driver and the
VIF Pool Manager you need to include::
KURYR_VIF_POOL_MANAGER=True
4. Once devstack is done and all services are up inside the VM, the next
step is to configure the missing information at ``/etc/kuryr/kuryr.conf``:


@@ -92,6 +92,9 @@ k8s_opts = [
help=_("The driver that provides external IP for LB at "
"Kubernetes"),
default='neutron_floating_ip'),
cfg.BoolOpt('enable_manager',
help=_("Enable Manager to manage the pools."),
default=False),
]
neutron_defaults = [


@@ -37,3 +37,6 @@ KURYR_PORT_NAME = 'kuryr-pool-port'
OCTAVIA_L2_MEMBER_MODE = "L2"
OCTAVIA_L3_MEMBER_MODE = "L3"
VIF_POOL_POPULATE = '/populatePool'
VIF_POOL_FREE = '/freePool'


@@ -31,6 +31,7 @@ from kuryr_kubernetes import config
from kuryr_kubernetes import constants
from kuryr_kubernetes.controller.drivers import base
from kuryr_kubernetes.controller.drivers import default_subnet
from kuryr_kubernetes.controller.managers import pool
from kuryr_kubernetes import exceptions
from kuryr_kubernetes import os_vif_util as ovu
@@ -329,6 +330,16 @@ class NestedVIFPool(BaseVIFPool):
"""
_known_trunk_ids = collections.defaultdict(str)
def __init__(self):
super(NestedVIFPool, self).__init__()
# Start the pool manager so that pools can be populated/freed on
# demand
if config.CONF.kubernetes.enable_manager:
self._pool_manager = pool.PoolManager()
def set_vif_driver(self, driver):
self._drv_vif = driver
def _get_port_from_pool(self, pool_key, pod, subnets):
try:
port_id = self._available_ports_pools[pool_key].pop()


@@ -0,0 +1,174 @@
# Copyright 2017 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import threading
from six.moves.BaseHTTPServer import BaseHTTPRequestHandler
from six.moves.socketserver import ThreadingUnixStreamServer
from neutronclient.common import exceptions as n_exc
from oslo_config import cfg as oslo_cfg
from oslo_log import log as logging
from oslo_serialization import jsonutils
from kuryr.lib._i18n import _
from kuryr_kubernetes import constants
from kuryr_kubernetes.controller.drivers import base as drivers
LOG = logging.getLogger(__name__)
pool_manager_opts = [
oslo_cfg.StrOpt('sock_file',
help=_("Absolute path to socket file that "
"will be used for communication with "
"the Pool Manager daemon"),
default='/run/kuryr/kuryr_manage.sock'),
]
oslo_cfg.CONF.register_opts(pool_manager_opts, "pool_manager")
class UnixDomainHttpServer(ThreadingUnixStreamServer):
pass
class RequestHandler(BaseHTTPRequestHandler):
protocol = "HTTP/1.0"
def do_POST(self):
content_length = int(self.headers.get('Content-Length', 0))
body = self.rfile.read(content_length)
params = dict(jsonutils.loads(body))
if self.path.endswith(constants.VIF_POOL_POPULATE):
trunk_ips = params.get('trunks', None)
num_ports = params.get('num_ports', 1)
if trunk_ips:
try:
self._create_subports(num_ports, trunk_ips)
except Exception:
response = ('Error while populating pool {0} with {1} '
'ports.'.format(trunk_ips, num_ports))
else:
response = ('Ports pool at {0} was populated with {1} '
'ports.'.format(trunk_ips, num_ports))
self.send_header('Content-Length', len(response))
self.end_headers()
self.wfile.write(response.encode())
else:
response = 'Trunk port IP(s) missing.'
self.send_header('Content-Length', len(response))
self.end_headers()
self.wfile.write(response.encode())
elif self.path.endswith(constants.VIF_POOL_FREE):
trunk_ips = params.get('trunks', None)
if not trunk_ips:
pool = "all"
else:
pool = trunk_ips
try:
self._delete_subports(trunk_ips)
except Exception:
response = 'Error freeing ports pool: {0}.'.format(pool)
else:
response = 'Ports pool belonging to {0} was freed.'.format(
pool)
self.send_header('Content-Length', len(response))
self.end_headers()
self.wfile.write(response.encode())
else:
response = 'Method not allowed.'
self.send_header('Content-Length', len(response))
self.end_headers()
self.wfile.write(response.encode())
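    # Load the drivers the controller is configured with and populate the
    # pool at each given trunk with num_ports new subports.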
def _create_subports(self, num_ports, trunk_ips):
try:
drv_project = drivers.PodProjectDriver.get_instance()
drv_subnets = drivers.PodSubnetsDriver.get_instance()
drv_sg = drivers.PodSecurityGroupsDriver.get_instance()
drv_vif = drivers.PodVIFDriver.get_instance()
drv_vif_pool = drivers.VIFPoolDriver.get_instance()
drv_vif_pool.set_vif_driver(drv_vif)
project_id = drv_project.get_project({})
security_groups = drv_sg.get_security_groups({}, project_id)
subnets = drv_subnets.get_subnets([], project_id)
except TypeError as ex:
LOG.error("Invalid driver type")
raise ex
for trunk_ip in trunk_ips:
try:
drv_vif_pool.force_populate_pool(
trunk_ip, project_id, subnets, security_groups, num_ports)
except n_exc.Conflict as ex:
LOG.error("VLAN Id conflict (already in use) at trunk %s",
trunk_ip)
raise ex
except n_exc.NeutronClientException as ex:
LOG.error("Error happended during subports addition at trunk: "
" %s", trunk_ip)
raise ex
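    # Ask the VIF pool driver to release the unused ports at the given
    # trunks (or at all pools when trunk_ips is empty).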
def _delete_subports(self, trunk_ips):
try:
drv_vif = drivers.PodVIFDriver.get_instance()
drv_vif_pool = drivers.VIFPoolDriver.get_instance()
drv_vif_pool.set_vif_driver(drv_vif)
drv_vif_pool.free_pool(trunk_ips)
except TypeError as ex:
LOG.error("Invalid driver type")
raise ex
class PoolManager(object):
"""Manages the ports pool enabling population and free actions.
`PoolManager` runs on the Kuryr-kubernetes controller and allows to
populate specific pools with a given amount of ports. In addition, it also
allows to remove all the (unused) ports in the given pool(s), or from all
of the pool if none of them is specified.
"""
def __init__(self):
pool_manager = threading.Thread(target=self._start_kuryr_manage_daemon)
pool_manager.setDaemon(True)
pool_manager.start()
def _start_kuryr_manage_daemon(self):
LOG.info("Pool manager started")
server_address = oslo_cfg.CONF.pool_manager.sock_file
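        # Remove a stale socket file possibly left behind by a previous run.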
try:
os.unlink(server_address)
except OSError:
if os.path.exists(server_address):
raise
try:
httpd = UnixDomainHttpServer(server_address, RequestHandler)
httpd.serve_forever()
except KeyboardInterrupt:
pass
except Exception:
LOG.exception('Failed to start Pool Manager.')
httpd.socket.close()


@@ -17,6 +17,7 @@ from kuryr.lib import opts as lib_opts
from kuryr_kubernetes import config
from kuryr_kubernetes.controller.drivers import nested_vif
from kuryr_kubernetes.controller.drivers import vif_pool
from kuryr_kubernetes.controller.managers import pool
_kuryr_k8s_opts = [
('kubernetes', config.k8s_opts),
@@ -25,6 +26,7 @@ _kuryr_k8s_opts = [
('pod_vif_nested', nested_vif.nested_vif_driver_opts),
('vif_pool', vif_pool.vif_pool_driver_opts),
('octavia_defaults', config.octavia_defaults),
('pool_manager', pool.pool_manager_opts),
]


@@ -0,0 +1,184 @@
# Copyright 2017 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
try:
from BaseHTTPServer import BaseHTTPRequestHandler
except ImportError:
from http.server import BaseHTTPRequestHandler
from oslo_serialization import jsonutils
from kuryr_kubernetes.controller.managers import pool as m_pool
from kuryr_kubernetes.tests import base as test_base
class TestRequestHandler(test_base.TestCase):
def setUp(self):
super(TestRequestHandler, self).setUp()
client_address = 'localhost'
server = '/tmp/server.lock'
req = mock.MagicMock()
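        # Stub out BaseHTTPRequestHandler.__init__ so that creating the
        # handler does not try to parse a request from a real socket.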
with mock.patch.object(BaseHTTPRequestHandler, '__init__') as m_http:
m_http.return_value = None
self._req_handler = m_pool.RequestHandler(req, client_address,
server)
self._req_handler.rfile = mock.Mock()
self._req_handler.wfile = mock.Mock()
def _do_POST_helper(self, method, path, headers, body, expected_resp,
trigger_exception, trunk_ips, num_ports=None):
self._req_handler.headers = headers
self._req_handler.path = path
with mock.patch.object(self._req_handler.rfile, 'read') as m_read,\
mock.patch.object(self._req_handler,
'_create_subports') as m_create,\
mock.patch.object(self._req_handler,
'_delete_subports') as m_delete:
m_read.return_value = body
if trigger_exception:
m_create.side_effect = Exception
m_delete.side_effect = Exception
with mock.patch.object(self._req_handler,
'send_header') as m_send_header,\
mock.patch.object(self._req_handler,
'end_headers') as m_end_headers,\
mock.patch.object(self._req_handler.wfile,
'write') as m_write:
self._req_handler.do_POST()
if method == 'create':
if trunk_ips:
m_create.assert_called_once_with(num_ports, trunk_ips)
else:
m_create.assert_not_called()
if method == 'delete':
m_delete.assert_called_once_with(trunk_ips)
m_send_header.assert_called_once_with('Content-Length',
len(expected_resp))
m_end_headers.assert_called_once()
m_write.assert_called_once_with(expected_resp)
def test_do_POST_populate(self):
method = 'create'
path = "http://localhost/populatePool"
trunk_ips = [u"10.0.0.6"]
num_ports = 3
body = jsonutils.dumps({"trunks": trunk_ips,
"num_ports": num_ports})
headers = {'Content-Type': 'application/json', 'Connection': 'close'}
headers['Content-Length'] = len(body)
trigger_exception = False
expected_resp = ('Ports pool at {} was populated with 3 ports.'
.format(trunk_ips)).encode()
self._do_POST_helper(method, path, headers, body, expected_resp,
trigger_exception, trunk_ips, num_ports)
def test_do_POST_populate_exception(self):
method = 'create'
path = "http://localhost/populatePool"
trunk_ips = [u"10.0.0.6"]
num_ports = 3
body = jsonutils.dumps({"trunks": trunk_ips,
"num_ports": num_ports})
headers = {'Content-Type': 'application/json', 'Connection': 'close'}
headers['Content-Length'] = len(body)
trigger_exception = True
expected_resp = ('Error while populating pool {0} with {1} ports.'
.format(trunk_ips, num_ports)).encode()
self._do_POST_helper(method, path, headers, body, expected_resp,
trigger_exception, trunk_ips, num_ports)
def test_do_POST_populate_no_trunks(self):
method = 'create'
path = "http://localhost/populatePool"
trunk_ips = []
num_ports = 3
body = jsonutils.dumps({"trunks": trunk_ips,
"num_ports": num_ports})
headers = {'Content-Type': 'application/json', 'Connection': 'close'}
headers['Content-Length'] = len(body)
trigger_exception = False
        expected_resp = ('Trunk port IP(s) missing.').encode()
self._do_POST_helper(method, path, headers, body, expected_resp,
trigger_exception, trunk_ips, num_ports)
def test_do_POST_free(self):
method = 'delete'
path = "http://localhost/freePool"
trunk_ips = [u"10.0.0.6"]
body = jsonutils.dumps({"trunks": trunk_ips})
headers = {'Content-Type': 'application/json', 'Connection': 'close'}
headers['Content-Length'] = len(body)
trigger_exception = False
expected_resp = ('Ports pool belonging to {0} was freed.'
.format(trunk_ips)).encode()
self._do_POST_helper(method, path, headers, body, expected_resp,
trigger_exception, trunk_ips)
def test_do_POST_free_exception(self):
method = 'delete'
path = "http://localhost/freePool"
trunk_ips = [u"10.0.0.6"]
body = jsonutils.dumps({"trunks": trunk_ips})
headers = {'Content-Type': 'application/json', 'Connection': 'close'}
headers['Content-Length'] = len(body)
trigger_exception = True
expected_resp = ('Error freeing ports pool: {0}.'
.format(trunk_ips)).encode()
self._do_POST_helper(method, path, headers, body, expected_resp,
trigger_exception, trunk_ips)
def test_do_POST_free_no_trunks(self):
method = 'delete'
path = "http://localhost/freePool"
trunk_ips = []
body = jsonutils.dumps({"trunks": trunk_ips})
headers = {'Content-Type': 'application/json', 'Connection': 'close'}
headers['Content-Length'] = len(body)
trigger_exception = False
expected_resp = ('Ports pool belonging to all was freed.').encode()
self._do_POST_helper(method, path, headers, body, expected_resp,
trigger_exception, trunk_ips)
def test_do_POST_wrong_action(self):
method = 'fake'
path = "http://localhost/fakeMethod"
trunk_ips = [u"10.0.0.6"]
body = jsonutils.dumps({"trunks": trunk_ips})
headers = {'Content-Type': 'application/json', 'Connection': 'close'}
headers['Content-Length'] = len(body)
trigger_exception = False
expected_resp = ('Method not allowed.').encode()
self._do_POST_helper(method, path, headers, body, expected_resp,
trigger_exception, trunk_ips)