Add Service Function Chain support

1. What is the problem?
   The Tricircle does not support Service Function Chain.

2. What is the solution to the problem?
   Add Service Function Chain support.

3. What features need to be implemented in the Tricircle?
   Add create and delete support.

Change-Id: I4bbe169040c17d202410bb5a649aca4df8464c3c
This commit is contained in:
parent 582e2becc8
commit 2d22bb18bf
@@ -19,6 +19,7 @@ Contents
    configuration
    networking-guide
    vlan-aware-vms-guide
+   service-function-chaining-guide
    api_v1
    contributing
@@ -0,0 +1,120 @@
===============================
Service Function Chaining Guide
===============================

Service Function Chaining provides the ability to define an ordered list of
network services (e.g. firewalls, load balancers). These services are then
"stitched" together in the network to create a service chain.

Installation
^^^^^^^^^^^^

After installing the Tricircle, please refer to
https://docs.openstack.org/developer/networking-sfc/installation.html
to install networking-sfc.

Configuration
^^^^^^^^^^^^^

- 1 Configure central Neutron server

  After installing the Tricircle and networking-sfc, enable the service plugins
  in central Neutron server by adding them in ``neutron.conf.0``
  (typically found in ``/etc/neutron/``)::

      service_plugins=networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin,tricircle.network.central_sfc_plugin.TricircleSfcPlugin

  In the same configuration file, specify the drivers to use in the plugins. ::

      [sfc]
      drivers = tricircle_sfc

      [flowclassifier]
      drivers = tricircle_fc

- 2 Configure local Neutron

  Please refer to
  https://docs.openstack.org/developer/networking-sfc/installation.html#Configuration
  to configure local networking-sfc.

How to play
^^^^^^^^^^^

- 1 Create pods via Tricircle Admin API

- 2 Create necessary resources in central Neutron server ::

      neutron --os-region-name=CentralRegion net-create --provider:network_type vxlan net1
      neutron --os-region-name=CentralRegion subnet-create net1 10.0.0.0/24
      neutron --os-region-name=CentralRegion port-create net1 --name p1
      neutron --os-region-name=CentralRegion port-create net1 --name p2
      neutron --os-region-name=CentralRegion port-create net1 --name p3
      neutron --os-region-name=CentralRegion port-create net1 --name p4
      neutron --os-region-name=CentralRegion port-create net1 --name p5
      neutron --os-region-name=CentralRegion port-create net1 --name p6

  Please note that the network type must be vxlan.

- 3 Get the image ID and flavor ID which will be used in VM booting. In the
  following steps, the VMs will be booted in RegionOne and RegionTwo. ::

      glance --os-region-name=RegionOne image-list
      nova --os-region-name=RegionOne flavor-list
      glance --os-region-name=RegionTwo image-list
      nova --os-region-name=RegionTwo flavor-list

- 4 Boot virtual machines ::

      openstack --os-region-name=RegionOne server create --flavor 1 --image $image1_id --nic port-id=$p1_id vm_src
      openstack --os-region-name=RegionOne server create --flavor 1 --image $image1_id --nic port-id=$p2_id --nic port-id=$p3_id vm_sfc1
      openstack --os-region-name=RegionTwo server create --flavor 1 --image $image2_id --nic port-id=$p4_id --nic port-id=$p5_id vm_sfc2
      openstack --os-region-name=RegionTwo server create --flavor 1 --image $image2_id --nic port-id=$p6_id vm_dst

- 5 Create port pairs in central Neutron server ::

      neutron --os-region-name=CentralRegion port-pair-create --ingress p2 --egress p3 pp1
      neutron --os-region-name=CentralRegion port-pair-create --ingress p4 --egress p5 pp2

- 6 Create port pair groups in central Neutron server ::

      neutron --os-region-name=CentralRegion port-pair-group-create --port-pair pp1 ppg1
      neutron --os-region-name=CentralRegion port-pair-group-create --port-pair pp2 ppg2

- 7 Create a flow classifier in central Neutron server ::

      neutron --os-region-name=CentralRegion flow-classifier-create --source-ip-prefix 10.0.0.0/24 --logical-source-port p1 fc1

- 8 Create a port chain in central Neutron server ::

      neutron --os-region-name=CentralRegion port-chain-create --flow-classifier fc1 --port-pair-group ppg1 --port-pair-group ppg2 pc1

- 9 Show the result in CentralRegion, RegionOne and RegionTwo ::

      neutron --os-region-name=CentralRegion port-chain-list
      neutron --os-region-name=RegionOne port-chain-list
      neutron --os-region-name=RegionTwo port-chain-list

  You will find the same port chain in each region.

- 10 Check whether the port chain is working

  In vm_dst, ping p1's IP address; it should fail.

  Enable the forwarding function of vm_sfc1 and vm_sfc2 ::

      sudo sh
      echo 1 > /proc/sys/net/ipv4/ip_forward

  Add the following route for vm_sfc1 and vm_sfc2 ::

      sudo ip route add $p6_ip_address dev eth1

  In vm_dst, ping p1's IP address again; it should succeed this time.

  .. note:: Not all images will bring up the second NIC, so you can ssh into the vm, use
     "ifconfig -a" to check whether all NICs are up, and bring up all NICs if necessary.
     In CirrOS you can type the following command to bring up one NIC. ::

         sudo cirros-dhcpc up $nic_name
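Step 9 above can also be sanity-checked programmatically. The sketch below (not part of the commit; the helper names are hypothetical) parses the ASCII tables printed by ``port-chain-list`` in each region and asserts the same chain appears everywhere; the table layout assumed is the standard python-neutronclient prettytable output.

```python
def parse_table(output):
    """Return the prettytable rows, each row a list of cell strings."""
    rows = []
    for line in output.splitlines():
        line = line.strip()
        if not line.startswith('|'):
            continue  # skip the +----+ border lines and blanks
        rows.append([c.strip() for c in line.strip('|').split('|')])
    return rows


def chain_names(output):
    """Extract the 'name' column from a port-chain-list table."""
    rows = parse_table(output)
    header, data = rows[0], rows[1:]
    name_idx = header.index('name')
    return {row[name_idx] for row in data}


# sample output with a made-up id, shaped like the neutron CLI table
central = """
+--------------------------------------+------+
| id                                   | name |
+--------------------------------------+------+
| 4ec5a516-xxxx                        | pc1  |
+--------------------------------------+------+
"""
region_one = central  # the chain should be mirrored to each bottom region

assert chain_names(central) == chain_names(region_one) == {'pc1'}
```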
@@ -0,0 +1,6 @@
---
features:
  - |
    Support service function chain creation and deletion based on
    networking-sfc. Currently all the ports in the port chain need to be in
    the same network and the network type must be VxLAN.
@@ -56,9 +56,12 @@ oslo.config.opts =
    tricircle.db = tricircle.db.opts:list_opts
    tricircle.network = tricircle.network.opts:list_opts
    tricircle.xjob = tricircle.xjob.opts:list_opts
tricircle.network.type_drivers =
    local = tricircle.network.drivers.type_local:LocalTypeDriver
    vlan = tricircle.network.drivers.type_vlan:VLANTypeDriver
    vxlan = tricircle.network.drivers.type_vxlan:VxLANTypeDriver
    flat = tricircle.network.drivers.type_flat:FlatTypeDriver
networking_sfc.flowclassifier.drivers =
    tricircle_fc = tricircle.network.central_fc_driver:TricircleFcDriver
networking_sfc.sfc.drivers =
    tricircle_sfc = tricircle.network.central_sfc_driver:TricircleSfcDriver
tox.ini
@@ -14,6 +14,7 @@ setenv =
 deps =
     -r{toxinidir}/test-requirements.txt
     -egit+https://git.openstack.org/openstack/neutron@master#egg=neutron
+    -egit+https://git.openstack.org/openstack/networking-sfc@master#egg=networking-sfc
 commands =
     rm -Rf .testrepository/times.dbm
     python setup.py testr --slowest --testr-args='{posargs}'
@@ -114,7 +114,8 @@ class AsyncJobController(rest.RestController):

         # if job_type = seg_rule_setup, we should ensure the project id
         # is equal to the one from resource.
-        if job_type == constants.JT_SEG_RULE_SETUP:
+        if job_type in (constants.JT_SEG_RULE_SETUP,
+                        constants.JT_RESOURCE_RECYCLE):
             if job['project_id'] != job['resource']['project_id']:
                 msg = (_("Specified project_id %(project_id_1)s and resource's"
                          " project_id %(project_id_2)s are different") %
@@ -28,6 +28,10 @@ RT_SD_SUBNET = 'shadow_subnet'
 RT_PORT = 'port'
 RT_TRUNK = 'trunk'
 RT_SD_PORT = 'shadow_port'
+RT_PORT_PAIR = 'port_pair'
+RT_PORT_PAIR_GROUP = 'port_pair_group'
+RT_FLOW_CLASSIFIER = 'flow_classifier'
+RT_PORT_CHAIN = 'port_chain'
 RT_ROUTER = 'router'
 RT_NS_ROUTER = 'ns_router'
 RT_SG = 'security_group'

@@ -73,6 +77,9 @@ interface_port_device_id = 'reserved_gateway_port'
 MAX_INT = 0x7FFFFFFF
 DEFAULT_DESTINATION = '0.0.0.0/0'
 expire_time = datetime.datetime(2000, 1, 1)
+STR_IN_USE = 'in use'
+STR_USED_BY = 'used by'
+STR_CONFLICTS_WITH = 'conflicts with'

 # job status
 JS_New = '3_New'

@@ -102,6 +109,8 @@ JT_NETWORK_UPDATE = 'update_network'
 JT_SUBNET_UPDATE = 'subnet_update'
 JT_SHADOW_PORT_SETUP = 'shadow_port_setup'
 JT_TRUNK_SYNC = 'trunk_sync'
+JT_SFC_SYNC = 'sfc_sync'
+JT_RESOURCE_RECYCLE = 'resource_recycle'

 # network type
 NT_LOCAL = 'local'

@@ -133,7 +142,11 @@ job_resource_map = {
     JT_SUBNET_UPDATE: [(None, "pod_id"),
                        (RT_SUBNET, "subnet_id")],
     JT_SHADOW_PORT_SETUP: [(None, "pod_id"),
-                           (RT_NETWORK, "network_id")]
+                           (RT_NETWORK, "network_id")],
+    JT_SFC_SYNC: [(None, "pod_id"),
+                  (RT_PORT_CHAIN, "portchain_id"),
+                  (RT_NETWORK, "network_id")],
+    JT_RESOURCE_RECYCLE: [(None, "project_id")]
 }

 # map raw job status to more human readable job status
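The entries in ``job_resource_map`` describe, in order, which resource each segment of a job's '#'-separated resource string holds (the commit's ``XJobAPI.sync_service_function_chain`` sends ``'%s#%s#%s' % (pod_id, portchain_id, net_id)``). A standalone sketch of that packing scheme — the helper names below are hypothetical, not Tricircle APIs:

```python
JT_SFC_SYNC = 'sfc_sync'

# mirrors the (resource_type, field_name) tuples in job_resource_map
job_resource_map = {
    JT_SFC_SYNC: [(None, 'pod_id'),
                  ('port_chain', 'portchain_id'),
                  ('network', 'network_id')],
}


def pack_resource_id(job_type, values):
    """Join the resource values in the order job_resource_map defines."""
    fields = [field for _, field in job_resource_map[job_type]]
    return '#'.join(values[field] for field in fields)


def unpack_resource_id(job_type, combined):
    """Split a combined id back into a field -> value dict."""
    fields = [field for _, field in job_resource_map[job_type]]
    return dict(zip(fields, combined.split('#')))


combined = pack_resource_id(JT_SFC_SYNC, {
    'pod_id': 'pod-1', 'portchain_id': 'pc-1', 'network_id': 'net-1'})
assert combined == 'pod-1#pc-1#net-1'
assert unpack_resource_id(JT_SFC_SYNC, combined)['portchain_id'] == 'pc-1'
```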
@@ -156,7 +169,9 @@ job_handles = {
     JT_NETWORK_UPDATE: "update_network",
     JT_SUBNET_UPDATE: "update_subnet",
     JT_TRUNK_SYNC: "sync_trunk",
-    JT_SHADOW_PORT_SETUP: "setup_shadow_ports"
+    JT_SHADOW_PORT_SETUP: "setup_shadow_ports",
+    JT_SFC_SYNC: "sync_service_function_chain",
+    JT_RESOURCE_RECYCLE: "recycle_resources"
 }

 # map job type to its primary resource and then we only validate the project_id

@@ -170,5 +185,7 @@ job_primary_resource_map = {
     JT_NETWORK_UPDATE: (RT_NETWORK, "network_id"),
     JT_SUBNET_UPDATE: (RT_SUBNET, "subnet_id"),
     JT_TRUNK_SYNC: (RT_TRUNK, "trunk_id"),
-    JT_SHADOW_PORT_SETUP: (RT_NETWORK, "network_id")
+    JT_SHADOW_PORT_SETUP: (RT_NETWORK, "network_id"),
+    JT_SFC_SYNC: (RT_PORT_CHAIN, "portchain_id"),
+    JT_RESOURCE_RECYCLE: (None, "project_id")
 }
@@ -94,7 +94,11 @@ class NeutronResourceHandle(ResourceHandle):
         'security_group': LIST | CREATE | GET,
         'security_group_rule': LIST | CREATE | DELETE,
         'floatingip': LIST | CREATE | UPDATE | DELETE,
-        'trunk': LIST | CREATE | UPDATE | GET | DELETE | ACTION}
+        'trunk': LIST | CREATE | UPDATE | GET | DELETE | ACTION,
+        'port_chain': LIST | CREATE | DELETE | GET | UPDATE,
+        'port_pair_group': LIST | CREATE | DELETE | GET | UPDATE,
+        'port_pair': LIST | CREATE | DELETE | GET | UPDATE,
+        'flow_classifier': LIST | CREATE | DELETE | GET | UPDATE}

     def _get_client(self, cxt):
         token = cxt.auth_token
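The ``LIST | CREATE | ...`` values in this table are bit flags declaring which operations the resource handle supports per resource type. A self-contained sketch of the idea (the flag values here are illustrative, not the ones Tricircle uses):

```python
# assign each operation its own bit
LIST, CREATE, DELETE, GET, UPDATE, ACTION = (1 << i for i in range(6))

support_resource = {
    'port_chain': LIST | CREATE | DELETE | GET | UPDATE,
    'trunk': LIST | CREATE | UPDATE | GET | DELETE | ACTION,
}


def supports(resource, operation):
    """Check a single operation bit against the resource's flag set."""
    return bool(support_resource.get(resource, 0) & operation)


assert supports('port_chain', UPDATE)
assert not supports('port_chain', ACTION)  # no ACTION bit for port_chain
assert supports('trunk', ACTION)
```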
@@ -126,3 +126,17 @@ class XJobAPI(object):
         self.invoke_method(
             t_ctx, project_id, constants.job_handles[constants.JT_TRUNK_SYNC],
             constants.JT_TRUNK_SYNC, '%s#%s' % (pod_id, trunk_id))
+
+    def sync_service_function_chain(self, ctxt, project_id, portchain_id,
+                                    net_id, pod_id):
+        self.invoke_method(
+            ctxt, project_id,
+            constants.job_handles[constants.JT_SFC_SYNC],
+            constants.JT_SFC_SYNC,
+            '%s#%s#%s' % (pod_id, portchain_id, net_id))
+
+    def recycle_resources(self, ctxt, project_id):
+        self.invoke_method(
+            ctxt, project_id,
+            constants.job_handles[constants.JT_RESOURCE_RECYCLE],
+            constants.JT_RESOURCE_RECYCLE, project_id)
@@ -678,3 +678,33 @@ def is_valid_model_filters(model, filters):
         if not hasattr(model, key):
             return False
     return True
+
+
+def create_recycle_resource(context, resource_id, resource_type, project_id):
+    try:
+        context.session.begin()
+        route = core.create_resource(context, models.RecycleResources,
+                                     {'resource_id': resource_id,
+                                      'resource_type': resource_type,
+                                      'project_id': project_id})
+        context.session.commit()
+        return route
+    except db_exc.DBDuplicateEntry:
+        # entry has already been created
+        context.session.rollback()
+        return None
+    finally:
+        context.session.close()
+
+
+def list_recycle_resources(context, filters=None, sorts=None):
+    with context.session.begin():
+        resources = core.query_resource(
+            context, models.RecycleResources, filters or [], sorts or [])
+        return resources
+
+
+def delete_recycle_resource(context, resource_id):
+    with context.session.begin():
+        return core.delete_resource(
+            context, models.RecycleResources, resource_id)
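The recycle-resources functions above make recording a resource for later cleanup idempotent: inserting the same entry twice is treated as "already recorded" rather than an error, which is what the ``DBDuplicateEntry`` branch achieves. A runnable sketch of the same pattern using stdlib ``sqlite3`` in place of the Tricircle session/model machinery:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE recycle_resources ('
             'resource_id TEXT PRIMARY KEY, '
             'resource_type TEXT NOT NULL, '
             'project_id TEXT NOT NULL)')


def create_recycle_resource(resource_id, resource_type, project_id):
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute('INSERT INTO recycle_resources VALUES (?, ?, ?)',
                         (resource_id, resource_type, project_id))
        return resource_id
    except sqlite3.IntegrityError:
        # entry has already been created
        return None


def list_recycle_resources(project_id):
    cur = conn.execute('SELECT resource_id, resource_type FROM '
                       'recycle_resources WHERE project_id = ?',
                       (project_id,))
    return cur.fetchall()


assert create_recycle_resource('pc-1', 'port_chain', 'proj-1') == 'pc-1'
assert create_recycle_resource('pc-1', 'port_chain', 'proj-1') is None
assert list_recycle_resources('proj-1') == [('pc-1', 'port_chain')]
```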
@@ -0,0 +1,36 @@
# Copyright 2017 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sqlalchemy as sql


def upgrade(migrate_engine):
    meta = sql.MetaData()
    meta.bind = migrate_engine

    recycle_resources = sql.Table(
        'recycle_resources', meta,
        sql.Column('resource_id', sql.String(length=36), primary_key=True),
        sql.Column('resource_type', sql.String(length=64), nullable=False),
        sql.Column('project_id', sql.String(length=36),
                   nullable=False, index=True),
        mysql_engine='InnoDB',
        mysql_charset='utf8')

    recycle_resources.create()


def downgrade(migrate_engine):
    raise NotImplementedError('downgrade not supported')
@@ -0,0 +1,24 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from sqlalchemy import MetaData, Table
from sqlalchemy import Index


def upgrade(migrate_engine):
    meta = MetaData(bind=migrate_engine)
    resource_routings = Table('resource_routings', meta, autoload=True)
    index = Index('resource_routings0bottom_id',
                  resource_routings.c.bottom_id)
    index.create()
@@ -71,7 +71,7 @@ class ResourceRouting(core.ModelBase, core.DictBase, models.TimestampMixin):
     id = sql.Column(sql.BigInteger().with_variant(sql.Integer(), 'sqlite'),
                     primary_key=True, autoincrement=True)
     top_id = sql.Column('top_id', sql.String(length=127), nullable=False)
-    bottom_id = sql.Column('bottom_id', sql.String(length=36))
+    bottom_id = sql.Column('bottom_id', sql.String(length=36), index=True)
     pod_id = sql.Column('pod_id', sql.String(length=36),
                        sql.ForeignKey('pods.pod_id'),
                        nullable=False)
@@ -134,3 +134,16 @@ class ShadowAgent(core.ModelBase, core.DictBase):
     type = sql.Column('type', sql.String(length=36), nullable=False)
     # considering IPv6 address, set the length to 48
     tunnel_ip = sql.Column('tunnel_ip', sql.String(length=48), nullable=False)
+
+
+class RecycleResources(core.ModelBase, core.DictBase):
+    __tablename__ = 'recycle_resources'
+
+    attributes = ['resource_id', 'resource_type', 'project_id']
+
+    resource_id = sql.Column('resource_id',
+                             sql.String(length=36), primary_key=True)
+    resource_type = sql.Column('resource_type',
+                               sql.String(length=64), nullable=False)
+    project_id = sql.Column('project_id',
+                            sql.String(length=36), nullable=False, index=True)
@@ -0,0 +1,84 @@
# Copyright 2017 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import helpers as log_helpers
from oslo_log import log

from networking_sfc.services.flowclassifier.drivers import base as fc_driver

from neutronclient.common import exceptions as client_exceptions

import tricircle.common.client as t_client
import tricircle.common.constants as t_constants
import tricircle.common.context as t_context
from tricircle.common import xrpcapi
import tricircle.db.api as db_api


LOG = log.getLogger(__name__)


class TricircleFcDriver(fc_driver.FlowClassifierDriverBase):

    def __init__(self):
        self.xjob_handler = xrpcapi.XJobAPI()
        self.clients = {}

    def initialize(self):
        pass

    def _get_client(self, region_name):
        if region_name not in self.clients:
            self.clients[region_name] = t_client.Client(region_name)
        return self.clients[region_name]

    @log_helpers.log_method_call
    def create_flow_classifier(self, context):
        pass

    @log_helpers.log_method_call
    def update_flow_classifier(self, context):
        pass

    @log_helpers.log_method_call
    def delete_flow_classifier(self, context):
        t_ctx = t_context.get_context_from_neutron_context(
            context._plugin_context)
        flowclassifier_id = context.current['id']
        mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, flowclassifier_id, t_constants.RT_FLOW_CLASSIFIER)
        for b_pod, b_classifier_id in mappings:
            b_region_name = b_pod['region_name']
            b_client = self._get_client(b_region_name)
            try:
                b_client.delete_flow_classifiers(t_ctx, b_classifier_id)
            except client_exceptions.NotFound:
                LOG.debug(('flow classifier: %(classifier_id)s not found, '
                           'region name: %(name)s'),
                          {'classifier_id': flowclassifier_id,
                           'name': b_region_name})
            db_api.delete_mappings_by_bottom_id(t_ctx, b_classifier_id)

    def delete_flow_classifier_precommit(self, context):
        t_ctx = t_context.get_context_from_neutron_context(
            context._plugin_context)
        flowclassifier_id = context.current['id']
        db_api.create_recycle_resource(
            t_ctx, flowclassifier_id, t_constants.RT_FLOW_CLASSIFIER,
            t_ctx.project_id)

    @log_helpers.log_method_call
    def create_flow_classifier_precommit(self, context):
        pass
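The delete methods in these central drivers all follow one pattern: look up the bottom-pod mappings for the top resource, delete the bottom copy in each region, tolerate NotFound (the bottom copy may already be gone), and always remove the mapping. A standalone sketch of that pattern with a fake client — the names here are illustrative, not Tricircle APIs:

```python
class NotFound(Exception):
    pass


class FakeClient:
    """Stand-in for the per-region bottom Neutron client."""

    def __init__(self, existing):
        self.existing = set(existing)

    def delete(self, res_id):
        if res_id not in self.existing:
            raise NotFound(res_id)
        self.existing.remove(res_id)


def delete_bottom_resources(mappings, clients, delete_mapping):
    for region, bottom_id in mappings:
        try:
            clients[region].delete(bottom_id)
        except NotFound:
            pass  # bottom copy already deleted; still clean the mapping
        delete_mapping(bottom_id)


clients = {'RegionOne': FakeClient(['fc-bottom-1']),
           'RegionTwo': FakeClient([])}  # bottom copy already gone
deleted_mappings = []
delete_bottom_resources(
    [('RegionOne', 'fc-bottom-1'), ('RegionTwo', 'fc-bottom-2')],
    clients, deleted_mappings.append)

assert clients['RegionOne'].existing == set()
assert deleted_mappings == ['fc-bottom-1', 'fc-bottom-2']
```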
@@ -0,0 +1,181 @@
# Copyright 2017 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import helpers as log_helpers
from oslo_log import log

from networking_sfc.services.sfc.drivers import base as sfc_driver

from neutron_lib.plugins import directory
from neutronclient.common import exceptions as client_exceptions

import tricircle.common.client as t_client
import tricircle.common.constants as t_constants
import tricircle.common.context as t_context
from tricircle.common import xrpcapi
import tricircle.db.api as db_api
from tricircle.network import central_plugin


LOG = log.getLogger(__name__)


class TricircleSfcDriver(sfc_driver.SfcDriverBase):

    def __init__(self):
        self.xjob_handler = xrpcapi.XJobAPI()
        self.clients = {}

    def initialize(self):
        pass

    def _get_client(self, region_name):
        if region_name not in self.clients:
            self.clients[region_name] = t_client.Client(region_name)
        return self.clients[region_name]

    def _get_net_id_by_portpairgroups(self, context,
                                      sfc_plugin, port_pair_groups):
        if not port_pair_groups:
            return None
        port_pairs = sfc_plugin.get_port_pairs(
            context, {'portpairgroup_id': port_pair_groups})
        if not port_pairs:
            return None
        # currently we only support port pairs in the same network
        first_ingress = port_pairs[0]['ingress']
        core_plugin = directory.get_plugin()
        ingress_port = super(central_plugin.TricirclePlugin, core_plugin
                             ).get_port(context, first_ingress)
        return ingress_port['network_id']

    @log_helpers.log_method_call
    def create_port_chain(self, context):
        pass

    @log_helpers.log_method_call
    def create_port_chain_precommit(self, context):
        plugin_context = context._plugin_context
        t_ctx = t_context.get_context_from_neutron_context(plugin_context)
        port_chain = context.current
        net_id = self._get_net_id_by_portpairgroups(
            plugin_context, context._plugin, port_chain['port_pair_groups'])
        if net_id:
            self.xjob_handler.sync_service_function_chain(
                t_ctx, port_chain['project_id'], port_chain['id'], net_id,
                t_constants.POD_NOT_SPECIFIED)

    @log_helpers.log_method_call
    def delete_port_chain(self, context):
        t_ctx = t_context.get_context_from_neutron_context(
            context._plugin_context)
        portchain_id = context.current['id']
        mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, portchain_id, t_constants.RT_PORT_CHAIN)
        for b_pod, b_portchain_id in mappings:
            b_region_name = b_pod['region_name']
            b_client = self._get_client(region_name=b_region_name)
            try:
                b_client.delete_port_chains(t_ctx, b_portchain_id)
            except client_exceptions.NotFound:
                LOG.debug(('port chain: %(portchain_id)s not found, '
                           'region name: %(name)s'),
                          {'portchain_id': portchain_id,
                           'name': b_region_name})
            db_api.delete_mappings_by_bottom_id(t_ctx, b_portchain_id)

    @log_helpers.log_method_call
    def delete_port_chain_precommit(self, context):
        t_ctx = t_context.get_context_from_neutron_context(
            context._plugin_context)
        portchain_id = context.current['id']
        db_api.create_recycle_resource(
            t_ctx, portchain_id, t_constants.RT_PORT_CHAIN,
            t_ctx.project_id)

    @log_helpers.log_method_call
    def delete_port_pair_group(self, context):
        t_ctx = t_context.get_context_from_neutron_context(
            context._plugin_context)
        portpairgroup_id = context.current['id']
        mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, portpairgroup_id, t_constants.RT_PORT_PAIR_GROUP)
        for b_pod, b_portpairgroup_id in mappings:
            b_region_name = b_pod['region_name']
            b_client = self._get_client(b_region_name)
            try:
                b_client.delete_port_pair_groups(t_ctx, b_portpairgroup_id)
            except client_exceptions.NotFound:
                LOG.debug(('port pair group: %(portpairgroup_id)s not found, '
                           'region name: %(name)s'),
                          {'portpairgroup_id': portpairgroup_id,
                           'name': b_region_name})
            db_api.delete_mappings_by_bottom_id(t_ctx, b_portpairgroup_id)

    def delete_port_pair_group_precommit(self, context):
        t_ctx = t_context.get_context_from_neutron_context(
            context._plugin_context)
        portpairgroup_id = context.current['id']
        db_api.create_recycle_resource(
            t_ctx, portpairgroup_id, t_constants.RT_PORT_PAIR_GROUP,
            t_ctx.project_id)

    @log_helpers.log_method_call
    def delete_port_pair(self, context):
        t_ctx = t_context.get_context_from_neutron_context(
            context._plugin_context)
        portpair_id = context.current['id']
        mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, portpair_id, t_constants.RT_PORT_PAIR)
        for b_pod, b_portpair_id in mappings:
            b_region_name = b_pod['region_name']
            b_client = self._get_client(b_region_name)
            try:
                b_client.delete_port_pairs(t_ctx, b_portpair_id)
            except client_exceptions.NotFound:
                LOG.debug(('port pair: %(portpair_id)s not found, '
                           'region name: %(name)s'),
                          {'portpair_id': portpair_id, 'name': b_region_name})
            db_api.delete_mappings_by_bottom_id(t_ctx, b_portpair_id)

    def delete_port_pair_precommit(self, context):
        t_ctx = t_context.get_context_from_neutron_context(
            context._plugin_context)
        portpair_id = context.current['id']
        db_api.create_recycle_resource(
            t_ctx, portpair_id, t_constants.RT_PORT_PAIR,
            t_ctx.project_id)

    @log_helpers.log_method_call
    def update_port_chain(self, context):
        pass

    @log_helpers.log_method_call
    def create_port_pair_group(self, context):
        pass

    @log_helpers.log_method_call
    def update_port_pair_group(self, context):
        pass

    @log_helpers.log_method_call
    def create_port_pair(self, context):
        pass

    @log_helpers.log_method_call
    def update_port_pair(self, context):
        pass
@@ -0,0 +1,41 @@
# Copyright 2017 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from oslo_log import log

from networking_sfc.extensions import sfc as sfc_ext
from networking_sfc.services.sfc import plugin as sfc_plugin
from neutron_lib import exceptions as n_exc
from neutron_lib.plugins import directory


LOG = log.getLogger(__name__)


class TricircleSfcPlugin(sfc_plugin.SfcPlugin):

    def __init__(self):
        super(TricircleSfcPlugin, self).__init__()

    # TODO(xiulin): Tricircle's top region can not get port's
    # binding information well now, so override this function,
    # we will improve this later.
    def _get_port(self, context, id):
        core_plugin = directory.get_plugin()
        try:
            return core_plugin.get_port(context, id)
        except n_exc.PortNotFound:
            raise sfc_ext.PortPairPortNotFound(id=id)
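``TricircleSfcPlugin._get_port`` above is an exception-translation shim: the core plugin's ``PortNotFound`` becomes networking-sfc's ``PortPairPortNotFound`` so callers get an SFC-domain error. The idiom in isolation — the exception classes below are stand-ins, not the real Neutron ones:

```python
class PortNotFound(Exception):
    pass


class PortPairPortNotFound(Exception):
    def __init__(self, id):
        self.id = id
        super(PortPairPortNotFound, self).__init__(
            'port %s for port pair not found' % id)


def get_port(ports, port_id):
    """Core-plugin-style lookup raising the generic error."""
    try:
        return ports[port_id]
    except KeyError:
        raise PortNotFound()


def get_port_for_port_pair(ports, port_id):
    """Translate the generic error into the SFC-domain one."""
    try:
        return get_port(ports, port_id)
    except PortNotFound:
        raise PortPairPortNotFound(id=port_id)


try:
    get_port_for_port_pair({}, 'p-missing')
except PortPairPortNotFound as e:
    assert e.id == 'p-missing'
```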
@@ -18,9 +18,12 @@ from oslo_log import log

 from neutron.plugins.ml2 import driver_api
 from neutron.plugins.ml2.drivers import type_vxlan
+from neutron_lib import constants as q_lib_constants
 from neutron_lib import exceptions as n_exc

 from tricircle.common import constants
+import tricircle.common.context as t_context
+import tricircle.db.api as db_api

 LOG = log.getLogger(__name__)

@@ -54,3 +57,13 @@ class VxLANTypeDriver(type_vxlan.VxlanTypeDriver):

     def get_mtu(self, physical_network=None):
         pass
+
+    def get_endpoint_by_host(self, host):
+        LOG.debug("get_endpoint_by_host() called for host %s", host)
+        host_endpoint = {'ip_address': None}
+        context = t_context.get_db_context()
+        agents = db_api.get_agent_by_host_type(
+            context, host, q_lib_constants.AGENT_TYPE_OVS)
+        if agents:
+            host_endpoint['ip_address'] = agents['tunnel_ip']
+        return host_endpoint
@@ -928,6 +928,7 @@ class NetworkHelper(object):
                     'fixed_ips'][0]['ip_address']}],
                 'mac_address': port_body['mac_address'],
                 'device_owner': t_constants.DEVICE_OWNER_SHADOW,
+                'device_id': port_body['device_id'],
                 portbindings.HOST_ID: host
             }
         }
@@ -590,6 +590,9 @@ class TricirclePlugin(plugin.Ml2Plugin):
         agent_state = helper.NetworkHelper.construct_agent_data(
             agent_type, agent_host, tunnel_ip)
         self.core_plugin.create_or_update_agent(context, agent_state)
+        driver = self.core_plugin.type_manager.drivers.get('vxlan')
+        if driver:
+            driver.obj.add_endpoint(tunnel_ip, agent_host)

     def _fill_agent_info_in_profile(self, context, port_id, host,
                                     profile_dict):
@@ -747,7 +747,8 @@ class TestAsyncJobController(API_FunctionalTest):
         # prepare the project id for job creation, currently job parameter
         # contains job type and job resource information.
         job_type = job['type']
-        if job_type == constants.JT_SEG_RULE_SETUP:
+        if job_type in (constants.JT_SEG_RULE_SETUP,
+                        constants.JT_RESOURCE_RECYCLE):
             project_id = job['resource']['project_id']
         else:
             project_id = uuidutils.generate_uuid()
@@ -593,7 +593,8 @@ class AsyncJobControllerTest(unittest.TestCase):
         # prepare the project id for job creation, currently job parameter
         # contains job type and job resource information.
         job_type = job['type']
-        if job_type == constants.JT_SEG_RULE_SETUP:
+        if job_type in (constants.JT_SEG_RULE_SETUP,
+                        constants.JT_RESOURCE_RECYCLE):
             project_id = job['resource']['project_id']
         else:
             project_id = uuidutils.generate_uuid()
@@ -2925,6 +2925,7 @@ class PluginTest(unittest.TestCase,
             'network_id': db_api.get_bottom_id_by_top_id_region_name(
                 t_ctx, t_net_id, 'pod_1', constants.RT_NETWORK),
             'mac_address': 'fa:16:3e:96:41:03',
+            'device_id': None,
             'fixed_ips': [
                 {'subnet_id': db_api.get_bottom_id_by_top_id_region_name(
                     t_ctx, t_subnet_id, 'pod_1', constants.RT_SUBNET),
@@ -0,0 +1,873 @@
# Copyright 2017 Huawei Technologies Co., Ltd.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from mock import patch
import six
import unittest

from networking_sfc.db import sfc_db
from networking_sfc.services.flowclassifier import plugin as fc_plugin

import neutron.conf.common as q_config
from neutron.db import db_base_plugin_v2
import neutron_lib.context as q_context
from neutron_lib.plugins import directory
from neutronclient.common import exceptions as client_exceptions

from oslo_config import cfg
from oslo_utils import uuidutils

from tricircle.common import client
from tricircle.common import constants
from tricircle.common import context
import tricircle.db.api as db_api
from tricircle.db import core
from tricircle.db import models
import tricircle.network.central_fc_driver as fc_driver
from tricircle.network import central_plugin
import tricircle.network.central_sfc_driver as sfc_driver
import tricircle.network.central_sfc_plugin as sfc_plugin
from tricircle.network import helper
import tricircle.tests.unit.utils as test_utils
from tricircle.xjob import xmanager


_resource_store = test_utils.get_resource_store()
TOP_PORTS = _resource_store.TOP_PORTS
TOP_PORTPAIRS = _resource_store.TOP_SFC_PORT_PAIRS
TOP_PORTPAIRGROUPS = _resource_store.TOP_SFC_PORT_PAIR_GROUPS
TOP_PORTCHAINS = _resource_store.TOP_SFC_PORT_CHAINS
TOP_FLOWCLASSIFIERS = _resource_store.TOP_SFC_FLOW_CLASSIFIERS
BOTTOM1_PORTS = _resource_store.BOTTOM1_PORTS
BOTTOM2_PORTS = _resource_store.BOTTOM2_PORTS
BOTTOM1_PORTPAIRS = _resource_store.BOTTOM1_SFC_PORT_PAIRS
BOTTOM2_PORTPAIRS = _resource_store.BOTTOM2_SFC_PORT_PAIRS
BOTTOM1_PORTPAIRGROUPS = _resource_store.BOTTOM1_SFC_PORT_PAIR_GROUPS
BOTTOM2_PORTPAIRGROUPS = _resource_store.BOTTOM2_SFC_PORT_PAIR_GROUPS
BOTTOM1_PORTCHAINS = _resource_store.BOTTOM1_SFC_PORT_CHAINS
BOTTOM2_PORTCHAINS = _resource_store.BOTTOM2_SFC_PORT_CHAINS
BOTTOM1_FLOWCLASSIFIERS = _resource_store.BOTTOM1_SFC_FLOW_CLASSIFIERS
BOTTOM2_FLOWCLASSIFIERS = _resource_store.BOTTOM2_SFC_FLOW_CLASSIFIERS
TEST_TENANT_ID = test_utils.TEST_TENANT_ID
DotDict = test_utils.DotDict

class FakeNetworkHelper(helper.NetworkHelper):
    def __init__(self):
        super(FakeNetworkHelper, self).__init__()

    def _get_client(self, region_name=None):
        return FakeClient(region_name)


class FakeBaseXManager(xmanager.XManager):
    def __init__(self):
        self.clients = {constants.TOP: client.Client()}
        self.helper = FakeNetworkHelper()

    def _get_client(self, region_name=None):
        return FakeClient(region_name)

    def sync_service_function_chain(self, ctx, payload):
        (b_pod_id, t_port_chain_id, net_id) = payload[
            constants.JT_SFC_SYNC].split('#')

        if b_pod_id == constants.POD_NOT_SPECIFIED:
            mappings = db_api.get_bottom_mappings_by_top_id(
                ctx, net_id, constants.RT_NETWORK)
            b_pods = [mapping[0] for mapping in mappings]
            for b_pod in b_pods:
                payload = '%s#%s#%s' % (b_pod['pod_id'], t_port_chain_id,
                                        net_id)
                super(FakeBaseXManager, self).sync_service_function_chain(
                    ctx, {constants.JT_SFC_SYNC: payload})
        else:
            super(FakeBaseXManager, self).sync_service_function_chain(
                ctx, payload)

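The fake manager above decodes the SFC sync job payload by splitting on `'#'`; the RPC stub later in this file builds it the same way. A minimal, self-contained sketch of that round trip (the helper names here are illustrative, only the `pod_id#portchain_id#net_id` layout comes from the test code):

```python
# Hypothetical helpers sketching the '#'-joined job payload format
# used by sync_service_function_chain; not Tricircle's actual API.
def build_sfc_payload(pod_id, portchain_id, net_id):
    # encode the three identifiers into one job payload string
    return '%s#%s#%s' % (pod_id, portchain_id, net_id)


def parse_sfc_payload(payload):
    # decode; this relies on the ids themselves never containing '#'
    pod_id, portchain_id, net_id = payload.split('#')
    return pod_id, portchain_id, net_id


payload = build_sfc_payload('pod_id_1', 'chain-1', 'net-1')
assert parse_sfc_payload(payload) == ('pod_id_1', 'chain-1', 'net-1')
```

The UUID-based ids used in these tests satisfy the no-`'#'` assumption, which is why a plain `split('#')` is safe here.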
class FakeXManager(FakeBaseXManager):
    def __init__(self, fake_plugin):
        super(FakeXManager, self).__init__()
        self.xjob_handler = FakeBaseRPCAPI(fake_plugin)


class FakeBaseRPCAPI(object):
    def __init__(self, fake_plugin):
        self.xmanager = FakeBaseXManager()

    def sync_service_function_chain(self, ctxt, project_id, portchain_id,
                                    net_id, pod_id):
        combine_id = '%s#%s#%s' % (pod_id, portchain_id, net_id)
        self.xmanager.sync_service_function_chain(
            ctxt,
            payload={constants.JT_SFC_SYNC: combine_id})

    def recycle_resources(self, ctx, project_id):
        self.xmanager.recycle_resources(ctx, payload={
            constants.JT_RESOURCE_RECYCLE: project_id})


class FakeRPCAPI(FakeBaseRPCAPI):
    def __init__(self, fake_plugin):
        self.xmanager = FakeXManager(fake_plugin)


class FakeClient(test_utils.FakeClient):

    def delete_resources(self, _type, ctx, _id):
        if _type == constants.RT_PORT_PAIR:
            pp = self.get_resource(constants.RT_PORT_PAIR, ctx, _id)
            if not pp:
                raise client_exceptions.NotFound()
            if pp['portpairgroup_id']:
                raise client_exceptions.Conflict(constants.STR_IN_USE)
        elif _type == constants.RT_FLOW_CLASSIFIER:
            pc_list = self._res_map[self.region_name][constants.RT_PORT_CHAIN]
            for pc in pc_list:
                if _id in pc['flow_classifiers']:
                    raise client_exceptions.Conflict(constants.STR_IN_USE)

        return super(FakeClient, self).delete_resources(_type, ctx, _id)

    def create_resources(self, _type, ctx, body):
        if _type == constants.RT_PORT_PAIR:
            pp_list = self._res_map[self.region_name][constants.RT_PORT_PAIR]
            for pp in pp_list:
                if body[_type]['ingress'] == pp['ingress']:
                    raise client_exceptions.BadRequest(constants.STR_USED_BY)
        elif _type == constants.RT_PORT_PAIR_GROUP:
            ppg_list = self._res_map[self.region_name][
                constants.RT_PORT_PAIR_GROUP]
            for pp in body[_type]['port_pairs']:
                for ppg in ppg_list:
                    if pp in ppg['port_pairs']:
                        raise client_exceptions.Conflict(constants.STR_IN_USE)
        elif _type == constants.RT_FLOW_CLASSIFIER:
            fc_list = self._res_map[self.region_name][
                constants.RT_FLOW_CLASSIFIER]
            for fc in fc_list:
                if (body[_type]['logical_source_port'] ==
                        fc['logical_source_port']):
                    raise client_exceptions.BadRequest(
                        constants.STR_CONFLICTS_WITH)
        elif _type == constants.RT_PORT_CHAIN:
            pc_list = self._res_map[self.region_name][constants.RT_PORT_CHAIN]
            for fc in body[_type]['flow_classifiers']:
                for pc in pc_list:
                    if fc in pc['flow_classifiers']:
                        raise client_exceptions.Conflict(constants.STR_IN_USE)

        return super(FakeClient, self).create_resources(_type, ctx, body)

    def get_port_chains(self, ctx, portchain_id):
        return self.get_resource('port_chain', ctx, portchain_id)

    def get_port_pair_groups(self, ctx, portpairgroup_id):
        return self.get_resource('port_pair_group', ctx, portpairgroup_id)

    def get_flow_classifiers(self, ctx, flowclassifier_id):
        return self.get_resource('flow_classifier', ctx, flowclassifier_id)

    def list_port_pairs(self, ctx, filters=None):
        return self.list_resources('port_pair', ctx, filters)

    def list_flow_classifiers(self, ctx, filters=None):
        return self.list_resources('flow_classifier', ctx, filters)

    def list_port_chains(self, ctx, filters=None):
        return self.list_resources('port_chain', ctx, filters)

    def list_port_pair_groups(self, ctx, filters=None):
        return self.list_resources('port_pair_group', ctx, filters)

    def update_port_pair_groups(self, ctx, id, port_pair_group):
        filters = [{'key': 'portpairgroup_id',
                    'comparator': 'eq',
                    'value': id}]
        pps = self.list_port_pairs(ctx, filters)
        for pp in pps:
            pp['portpairgroup_id'] = None
        return self.update_resources('port_pair_group',
                                     ctx, id, port_pair_group)

    def get_ports(self, ctx, port_id):
        return self.get_resource('port', ctx, port_id)

    def delete_port_chains(self, context, portchain_id):
        pc = self.get_resource('port_chain', context, portchain_id)
        if not pc:
            raise client_exceptions.NotFound()
        self.delete_resources('port_chain', context, portchain_id)

    def delete_port_pairs(self, context, portpair_id):
        pp = self.get_resource('port_pair', context, portpair_id)
        if not pp:
            raise client_exceptions.NotFound()
        if pp.get('portpairgroup_id'):
            raise client_exceptions.Conflict("in use")
        self.delete_resources('port_pair', context, portpair_id)

    def delete_port_pair_groups(self, context, portpairgroup_id):
        ppg = self.get_resource('port_pair_group', context, portpairgroup_id)
        if not ppg:
            raise client_exceptions.NotFound()
        for pc in BOTTOM1_PORTCHAINS:
            if portpairgroup_id in pc['port_pair_groups']:
                raise client_exceptions.Conflict("in use")
        self.delete_resources('port_pair_group', context, portpairgroup_id)

    def delete_flow_classifiers(self, context, flowclassifier_id):
        fc = self.get_resource('flow_classifier', context, flowclassifier_id)
        if not fc:
            raise client_exceptions.NotFound()
        for pc in BOTTOM1_PORTCHAINS:
            if flowclassifier_id in pc['flow_classifiers']:
                raise client_exceptions.Conflict("in use")
        self.delete_resources('flow_classifier', context, flowclassifier_id)

class FakeNeutronContext(q_context.Context):
    def __init__(self):
        self._session = None
        self.is_admin = True
        self.is_advsvc = False
        self.tenant_id = TEST_TENANT_ID

    @property
    def session(self):
        if not self._session:
            self._session = FakeSession()
        return self._session

    def elevated(self):
        return self


class FakeSession(test_utils.FakeSession):

    def _fill_port_chain_dict(self, port_chain, model_dict, fields=None):
        model_dict['port_pair_groups'] = [
            assoc['portpairgroup_id']
            for assoc in port_chain['chain_group_associations']]
        model_dict['flow_classifiers'] = [
            assoc['flowclassifier_id']
            for assoc in port_chain['chain_classifier_associations']]

    def add_hook(self, model_obj, model_dict):
        if model_obj.__tablename__ == 'sfc_port_chains':
            self._fill_port_chain_dict(model_obj, model_dict)


class FakeDriver(object):
    def __init__(self, driver, name):
        self.obj = driver
        self.name = name


class FakeSfcDriver(sfc_driver.TricircleSfcDriver):
    def __init__(self):
        self.xjob_handler = FakeRPCAPI(self)
        self.helper = helper.NetworkHelper(self)

    def _get_client(self, region_name):
        return FakeClient(region_name)


class FakeFcDriver(fc_driver.TricircleFcDriver):
    def __init__(self):
        self.xjob_handler = FakeRPCAPI(self)
        self.helper = helper.NetworkHelper(self)

    def _get_client(self, region_name):
        return FakeClient(region_name)


class FakeFcPlugin(fc_plugin.FlowClassifierPlugin):
    def __init__(self):
        super(FakeFcPlugin, self).__init__()
        self.driver_manager.ordered_drivers = [FakeDriver(
            FakeFcDriver(), "tricircle_fc")]


class FakeSfcPlugin(sfc_plugin.TricircleSfcPlugin):
    def __init__(self):
        super(FakeSfcPlugin, self).__init__()
        self.driver_manager.ordered_drivers = [FakeDriver(
            FakeSfcDriver(), "tricircle_sfc")]

    def _get_client(self, region_name):
        return FakeClient(region_name)

    def get_port_pairs(self, context, filters=None):
        client = self._get_client('top')
        _filter = []
        for key, values in six.iteritems(filters):
            for v in values:
                _filter.append(
                    {'key': key, 'comparator': 'eq', 'value': v})
        return client.list_resources('port_pair', context, _filter)

    def get_port_chain(self, context, id, fields=None):
        client = self._get_client('top')
        _filter = [{'key': 'id', 'comparator': 'eq', 'value': id}]
        portchains = client.list_resources('port_chain', context, _filter)
        if portchains:
            return portchains[0]
        return None


def fake_get_context_from_neutron_context(q_context):
    ctx = context.get_db_context()
    ctx.project_id = q_context.project_id
    return ctx


def fake_make_port_pair_group_dict(self, port_pair_group, fields=None):
    return port_pair_group


def fake_make_port_pair_dict(self, port_pair, fields=None):
    return port_pair


class FakeCorePlugin(central_plugin.TricirclePlugin):
    def __init__(self):
        pass

    def get_port(self, ctx, _id):
        return self._get_port(ctx, _id)

    def _get_port(self, ctx, _id):
        top_client = FakeClient()
        _filters = [{'key': 'id', 'comparator': 'eq', 'value': _id}]
        return top_client.list_resources('port', ctx, _filters)[0]


def fake_get_plugin(alias='core'):
    if alias == 'sfc':
        return FakeSfcPlugin()
    return FakeCorePlugin()


class PluginTest(unittest.TestCase):
    def setUp(self):
        core.initialize()
        core.ModelBase.metadata.create_all(core.get_engine())
        core.get_engine().execute('pragma foreign_keys=on')
        self.context = context.Context()
        xmanager.IN_TEST = True
        directory.get_plugin = fake_get_plugin

    def _basic_pod_setup(self):
        pod1 = {'pod_id': 'pod_id_1',
                'region_name': 'pod_1',
                'az_name': 'az_name_1'}
        pod2 = {'pod_id': 'pod_id_2',
                'region_name': 'pod_2',
                'az_name': 'az_name_2'}
        pod3 = {'pod_id': 'pod_id_0',
                'region_name': 'top_pod',
                'az_name': ''}
        for pod in (pod1, pod2, pod3):
            db_api.create_pod(self.context, pod)

    def _prepare_net_test(self, project_id, ctx, pod_name):
        t_net_id = uuidutils.generate_uuid()
        pod_id = 'pod_id_1' if pod_name == 'pod_1' else 'pod_id_2'
        core.create_resource(ctx, models.ResourceRouting,
                             {'top_id': t_net_id,
                              'bottom_id': t_net_id,
                              'pod_id': pod_id,
                              'project_id': project_id,
                              'resource_type': constants.RT_NETWORK})
        return t_net_id

    def _prepare_port_test(self, tenant_id, ctx, pod_name, net_id):
        t_port_id = uuidutils.generate_uuid()
        t_port = {
            'id': t_port_id,
            'network_id': net_id
        }
        TOP_PORTS.append(DotDict(t_port))
        b_port = {
            'id': t_port_id,
            'network_id': net_id
        }
        if pod_name == 'pod_1':
            BOTTOM1_PORTS.append(DotDict(b_port))
        else:
            BOTTOM2_PORTS.append(DotDict(b_port))

        pod_id = 'pod_id_1' if pod_name == 'pod_1' else 'pod_id_2'
        core.create_resource(ctx, models.ResourceRouting,
                             {'top_id': t_port_id,
                              'bottom_id': t_port_id,
                              'pod_id': pod_id,
                              'project_id': tenant_id,
                              'resource_type': constants.RT_PORT})

        return t_port_id

    def _update_port_pair_test(self, ppg_mappings, port_pairs):
        for pp_id, ppg_id in six.iteritems(ppg_mappings):
            for pp in port_pairs:
                if pp['id'] == pp_id:
                    pp['portpairgroup_id'] = ppg_id

    def _prepare_port_pair_test(self, project_id, t_ctx, pod_name,
                                index, ingress, egress, create_bottom,
                                portpairgroup_id=None):
        t_pp_id = uuidutils.generate_uuid()
        b_pp_id = uuidutils.generate_uuid()
        top_pp = {
            'id': t_pp_id,
            'project_id': project_id,
            'tenant_id': project_id,
            'ingress': ingress,
            'egress': egress,
            'name': 'top_pp_%d' % index,
            'service_function_parameters': {"weight": 1, "correlation": None},
            'description': "description",
            'portpairgroup_id': portpairgroup_id
        }
        TOP_PORTPAIRS.append(DotDict(top_pp))
        if create_bottom:
            btm_pp = {
                'id': b_pp_id,
                'project_id': project_id,
                'tenant_id': project_id,
                'ingress': ingress,
                'egress': egress,
                'name': 'btm_pp_%d' % index,
                'service_function_parameters': {"weight": 1,
                                                "correlation": None},
                'description': "description",
                'portpairgroup_id': portpairgroup_id
            }
            if pod_name == 'pod_1':
                BOTTOM1_PORTPAIRS.append(DotDict(btm_pp))
            else:
                BOTTOM2_PORTPAIRS.append(DotDict(btm_pp))

            pod_id = 'pod_id_1' if pod_name == 'pod_1' else 'pod_id_2'
            core.create_resource(t_ctx, models.ResourceRouting,
                                 {'top_id': t_pp_id,
                                  'bottom_id': b_pp_id,
                                  'pod_id': pod_id,
                                  'project_id': project_id,
                                  'resource_type': constants.RT_PORT_PAIR})

        return t_pp_id, b_pp_id

    def _prepare_port_pair_group_test(self, project_id, t_ctx, pod_name, index,
                                      t_pp_ids, create_bottom, b_pp_ids):
        t_ppg_id = uuidutils.generate_uuid()
        b_ppg_id = uuidutils.generate_uuid()

        top_ppg = {
            "group_id": 1,
            "description": "",
            "tenant_id": project_id,
            "port_pair_group_parameters": {"lb_fields": []},
            "port_pairs": t_pp_ids,
            "project_id": project_id,
            "id": t_ppg_id,
            "name": 'top_ppg_%d' % index}
        TOP_PORTPAIRGROUPS.append(DotDict(top_ppg))
        if create_bottom:
            btm_ppg = {
                "group_id": 1,
                "description": "",
                "tenant_id": project_id,
                "port_pair_group_parameters": {"lb_fields": []},
                "port_pairs": b_pp_ids,
                "project_id": project_id,
                "id": b_ppg_id,
                "name": 'btm_ppg_%d' % index}
            if pod_name == 'pod_1':
                BOTTOM1_PORTPAIRGROUPS.append(DotDict(btm_ppg))
            else:
                BOTTOM2_PORTPAIRGROUPS.append(DotDict(btm_ppg))

            pod_id = 'pod_id_1' if pod_name == 'pod_1' else 'pod_id_2'
            core.create_resource(t_ctx, models.ResourceRouting,
                                 {'top_id': t_ppg_id,
                                  'bottom_id': b_ppg_id,
                                  'pod_id': pod_id,
                                  'project_id': project_id,
                                  'resource_type':
                                      constants.RT_PORT_PAIR_GROUP})

        return t_ppg_id, b_ppg_id

    def _prepare_flow_classifier_test(self, project_id, t_ctx, pod_name,
                                      index, src_port_id, create_bottom):
        t_fc_id = uuidutils.generate_uuid()
        b_fc_id = uuidutils.generate_uuid()

        top_fc = {
            "source_port_range_min": None,
            "destination_ip_prefix": None,
            "protocol": None,
            "description": "",
            "l7_parameters": {},
            "source_port_range_max": None,
            "id": t_fc_id,
            "name": "t_fc_%s" % index,
            "ethertype": "IPv4",
            "tenant_id": project_id,
            "source_ip_prefix": "1.0.0.0/24",
            "logical_destination_port": None,
            "destination_port_range_min": None,
            "destination_port_range_max": None,
            "project_id": project_id,
            "logical_source_port": src_port_id}

        TOP_FLOWCLASSIFIERS.append(DotDict(top_fc))
        if create_bottom:
            btm_fc = {
                "source_port_range_min": None,
                "destination_ip_prefix": None,
                "protocol": None,
                "description": "",
                "l7_parameters": {},
                "source_port_range_max": None,
                "id": b_fc_id,
                "name": "b_fc_%s" % index,
                "ethertype": "IPv4",
                "tenant_id": project_id,
                "source_ip_prefix": "1.0.0.0/24",
                "logical_destination_port": None,
                "destination_port_range_min": None,
                "destination_port_range_max": None,
                "project_id": project_id,
                "logical_source_port": src_port_id}
            if pod_name == 'pod_1':
                BOTTOM1_FLOWCLASSIFIERS.append(DotDict(btm_fc))
            else:
                BOTTOM2_FLOWCLASSIFIERS.append(DotDict(btm_fc))

            pod_id = 'pod_id_1' if pod_name == 'pod_1' else 'pod_id_2'
            core.create_resource(t_ctx, models.ResourceRouting,
                                 {'top_id': t_fc_id,
                                  'bottom_id': b_fc_id,
                                  'pod_id': pod_id,
                                  'project_id': project_id,
                                  'resource_type':
                                      constants.RT_FLOW_CLASSIFIER})

        return t_fc_id, b_fc_id

    def _prepare_port_chain_test(self, project_id, t_ctx, pod_name,
                                 index, create_bottom, ids):
        t_pc_id = uuidutils.generate_uuid()
        b_pc_id = uuidutils.generate_uuid()

        top_pc = {
            "tenant_id": project_id,
            "name": "t_pc_%s" % index,
            "chain_parameters": {
                "symmetric": False, "correlation": "mpls"},
            "port_pair_groups": ids['t_ppg_id'],
            "flow_classifiers": ids['t_fc_id'],
            "project_id": project_id,
            "chain_id": 1,
            "description": "",
            "id": t_pc_id}

        TOP_PORTCHAINS.append(DotDict(top_pc))
        if create_bottom:
            btm_pc = {
                "tenant_id": project_id,
                "name": "b_pc_%s" % index,
                "chain_parameters": {
                    "symmetric": False, "correlation": "mpls"},
                "port_pair_groups": ids['b_ppg_id'],
                "flow_classifiers": ids['b_fc_id'],
                "project_id": project_id,
                "chain_id": 1,
                "description": "",
                "id": b_pc_id}
            if pod_name == 'pod_1':
                BOTTOM1_PORTCHAINS.append(DotDict(btm_pc))
            else:
                BOTTOM2_PORTCHAINS.append(DotDict(btm_pc))

            pod_id = 'pod_id_1' if pod_name == 'pod_1' else 'pod_id_2'
            core.create_resource(t_ctx, models.ResourceRouting,
                                 {'top_id': t_pc_id,
                                  'bottom_id': b_pc_id,
                                  'pod_id': pod_id,
                                  'project_id': project_id,
                                  'resource_type': constants.RT_PORT_CHAIN})

        return t_pc_id, b_pc_id

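The `_prepare_*` helpers above all end by writing a `ResourceRouting` row that maps a top (central Neutron) resource id to its bottom (local pod) id, which is exactly what the tests later read back with `db_api.get_bottom_mappings_by_top_id`. A dict-based stand-in for that table, with hypothetical names chosen only for illustration:

```python
# Illustrative in-memory stand-in for Tricircle's ResourceRouting
# table; function and field names here are hypothetical, only the
# top-id -> (pod, bottom-id) mapping idea comes from the tests above.
routings = []


def create_routing(top_id, bottom_id, pod_id, resource_type):
    # one row per (top resource, pod) pair, like _prepare_port_pair_test
    routings.append({'top_id': top_id, 'bottom_id': bottom_id,
                     'pod_id': pod_id, 'resource_type': resource_type})


def get_bottom_mappings_by_top_id(top_id, resource_type):
    # returns (pod_id, bottom_id) pairs, mirroring what the tests
    # index with mappings[0][1] to recover the bottom id
    return [(r['pod_id'], r['bottom_id']) for r in routings
            if r['top_id'] == top_id and r['resource_type'] == resource_type]


create_routing('t-pp-1', 'b-pp-1', 'pod_id_1', 'port_pair')
assert get_bottom_mappings_by_top_id('t-pp-1', 'port_pair') == \
    [('pod_id_1', 'b-pp-1')]
```

This also shows why the delete tests expect zero mappings afterwards: removing the bottom resource and its routing row makes the lookup return an empty list.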
    def test_get_client(self):
        driver = fc_driver.TricircleFcDriver()
        t_client = driver._get_client('top')
        self.assertEqual(t_client.region_name, 'top')

    @patch.object(context, 'get_context_from_neutron_context',
                  new=fake_get_context_from_neutron_context)
    @patch.object(directory, 'get_plugin', new=fake_get_plugin)
    def test_get_port(self):
        self._basic_pod_setup()
        project_id = TEST_TENANT_ID
        fake_plugin = FakeSfcPlugin()
        t_ctx = context.get_db_context()
        port_id = self._prepare_port_test(project_id, t_ctx, 'pod_1', None)
        # pass the DB context created above, not the context module itself
        port = fake_plugin._get_port(t_ctx, port_id)
        self.assertIsNotNone(port)

    @patch.object(db_base_plugin_v2.NeutronDbPluginV2, 'get_port',
                  new=FakeCorePlugin.get_port)
    @patch.object(sfc_db.SfcDbPlugin, 'get_port_pairs',
                  new=FakeSfcPlugin.get_port_pairs)
    @patch.object(context, 'get_context_from_neutron_context',
                  new=fake_get_context_from_neutron_context)
    def test_create_port_chain(self):
        project_id = TEST_TENANT_ID
        q_ctx = FakeNeutronContext()
        t_ctx = context.get_db_context()
        self._basic_pod_setup()
        fake_plugin = FakeSfcPlugin()

        t_net_id = self._prepare_net_test(project_id, t_ctx, 'pod_1')
        ingress = self._prepare_port_test(project_id, t_ctx, 'pod_1', t_net_id)
        egress = self._prepare_port_test(project_id, t_ctx, 'pod_1', t_net_id)
        src_port_id = self._prepare_port_test(project_id,
                                              t_ctx, 'pod_1', t_net_id)
        t_pp1_id, _ = self._prepare_port_pair_test(
            project_id, t_ctx, 'pod_1', 0, ingress, egress, False)
        t_ppg1_id, _ = self._prepare_port_pair_group_test(
            project_id, t_ctx, 'pod_1', 0, [t_pp1_id], False, None)
        ppg1_mapping = {t_pp1_id: t_ppg1_id}
        self._update_port_pair_test(ppg1_mapping, TOP_PORTPAIRS)
        t_fc1_id, _ = self._prepare_flow_classifier_test(
            project_id, t_ctx, 'pod_1', 0, src_port_id, False)
        body = {"port_chain": {
            "tenant_id": project_id,
            "name": "pc1",
            "chain_parameters": {
                "symmetric": False, "correlation": "mpls"},
            "port_pair_groups": [t_ppg1_id],
            "flow_classifiers": [t_fc1_id],
            "project_id": project_id,
            "chain_id": 1,
            "description": ""}}
        t_pc1 = fake_plugin.create_port_chain(q_ctx, body)
        pp1_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_pp1_id, constants.RT_PORT_PAIR)
        ppg1_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_ppg1_id, constants.RT_PORT_PAIR_GROUP)
        fc1_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_fc1_id, constants.RT_FLOW_CLASSIFIER)
        pc1_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_pc1['id'], constants.RT_PORT_CHAIN)
        btm1_pp_ids = [btm_pp['id'] for btm_pp in BOTTOM1_PORTPAIRS]
        btm1_ppg_ids = [btm_ppg['id'] for btm_ppg in BOTTOM1_PORTPAIRGROUPS]
        btm1_fc_ids = [btm_fc['id'] for btm_fc in BOTTOM1_FLOWCLASSIFIERS]
        btm1_pc_ids = [btm_pc['id'] for btm_pc in BOTTOM1_PORTCHAINS]
        b_pp1_id = pp1_mappings[0][1]
        b_ppg1_id = ppg1_mappings[0][1]
        b_fc1_id = fc1_mappings[0][1]
        b_pc1_id = pc1_mappings[0][1]
        self.assertEqual([b_pp1_id], btm1_pp_ids)
        self.assertEqual([b_ppg1_id], btm1_ppg_ids)
        self.assertEqual([b_fc1_id], btm1_fc_ids)
        self.assertEqual([b_pc1_id], btm1_pc_ids)

        # make conflict
        TOP_PORTCHAINS.pop()
        TOP_FLOWCLASSIFIERS.pop()
        TOP_PORTPAIRGROUPS.pop()
        TOP_PORTPAIRS.pop()
        b_ppg1_mapping = {b_pp1_id: b_ppg1_id}
        self._update_port_pair_test(b_ppg1_mapping, BOTTOM1_PORTPAIRS)
        db_api.create_recycle_resource(
            t_ctx, t_ppg1_id, constants.RT_PORT_PAIR_GROUP, q_ctx.project_id)

        t_pp2_id, _ = self._prepare_port_pair_test(
            project_id, t_ctx, 'pod_1', 0, ingress, egress, False)
        t_ppg2_id, _ = self._prepare_port_pair_group_test(
            project_id, t_ctx, 'pod_1', 0, [t_pp2_id], False, None)
        ppg2_mapping = {t_pp2_id: t_ppg2_id}
        self._update_port_pair_test(ppg2_mapping, TOP_PORTPAIRS)
        t_fc2_id, _ = self._prepare_flow_classifier_test(
            project_id, t_ctx, 'pod_1', 0, src_port_id, False)
        body2 = {"port_chain": {
            "tenant_id": project_id,
            "name": "pc1",
            "chain_parameters": {
                "symmetric": False, "correlation": "mpls"},
            "port_pair_groups": [t_ppg2_id],
            "flow_classifiers": [t_fc2_id],
            "project_id": project_id,
            "chain_id": 1,
            "description": ""}}
        t_pc2 = fake_plugin.create_port_chain(q_ctx, body2)
        pp2_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_pp2_id, constants.RT_PORT_PAIR)
        ppg2_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_ppg2_id, constants.RT_PORT_PAIR_GROUP)
        fc2_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_fc2_id, constants.RT_FLOW_CLASSIFIER)
        pc2_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_pc2['id'], constants.RT_PORT_CHAIN)
        btm1_pp_ids = [btm_pp['id'] for btm_pp in BOTTOM1_PORTPAIRS]
        btm1_ppg_ids = [btm_ppg['id'] for btm_ppg in BOTTOM1_PORTPAIRGROUPS]
        btm1_fc_ids = [btm_fc['id'] for btm_fc in BOTTOM1_FLOWCLASSIFIERS]
        btm1_pc_ids = [btm_pc['id'] for btm_pc in BOTTOM1_PORTCHAINS]
        b_pp2_id = pp2_mappings[0][1]
        b_ppg2_id = ppg2_mappings[0][1]
        b_fc2_id = fc2_mappings[0][1]
        b_pc2_id = pc2_mappings[0][1]
        self.assertEqual([b_pp2_id], btm1_pp_ids)
        self.assertEqual([b_ppg2_id], btm1_ppg_ids)
        self.assertEqual([b_fc2_id], btm1_fc_ids)
        self.assertEqual([b_pc2_id], btm1_pc_ids)

    @patch.object(context, 'get_context_from_neutron_context',
                  new=fake_get_context_from_neutron_context)
    def test_delete_port_chain(self):
        project_id = TEST_TENANT_ID
        q_ctx = FakeNeutronContext()
        t_ctx = context.get_db_context()
        self._basic_pod_setup()
        fake_plugin = FakeSfcPlugin()
        ids = {'t_ppg_id': [uuidutils.generate_uuid()],
               'b_ppg_id': [uuidutils.generate_uuid()],
               't_fc_id': [uuidutils.generate_uuid()],
               'b_fc_id': [uuidutils.generate_uuid()]}
        t_pc_id1, _ = self._prepare_port_chain_test(
            project_id, t_ctx, 'pod_1', 0, True, ids)

        fake_plugin.delete_port_chain(q_ctx, t_pc_id1)
        pc_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_pc_id1, constants.RT_PORT_CHAIN)
        self.assertEqual(len(TOP_PORTCHAINS), 0)
        self.assertEqual(len(BOTTOM1_PORTCHAINS), 0)
        self.assertEqual(len(pc_mappings), 0)

        t_pc_id2, _ = self._prepare_port_chain_test(
            project_id, t_ctx, 'pod_1', 0, True, ids)
        BOTTOM1_PORTCHAINS.pop()
        fake_plugin.delete_port_chain(q_ctx, t_pc_id2)
        pc_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_pc_id2, constants.RT_PORT_CHAIN)
        self.assertEqual(len(TOP_PORTCHAINS), 0)
        self.assertEqual(len(pc_mappings), 0)

    @patch.object(sfc_db.SfcDbPlugin, '_make_port_pair_group_dict',
                  new=fake_make_port_pair_group_dict)
    @patch.object(context, 'get_context_from_neutron_context',
                  new=fake_get_context_from_neutron_context)
    def test_delete_port_pair_group(self):
        project_id = TEST_TENANT_ID
        q_ctx = FakeNeutronContext()
        t_ctx = context.get_db_context()
        self._basic_pod_setup()
        fake_plugin = FakeSfcPlugin()

        t_pp_id = uuidutils.generate_uuid()
        b_pp_id = uuidutils.generate_uuid()

        t_ppg_id1, _ = self._prepare_port_pair_group_test(
            project_id, t_ctx, 'pod_1', 0, [t_pp_id], True, [b_pp_id])
        fake_plugin.delete_port_pair_group(q_ctx, t_ppg_id1)
        ppg_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_ppg_id1, constants.RT_PORT_PAIR_GROUP)
        self.assertEqual(len(TOP_PORTPAIRGROUPS), 0)
        self.assertEqual(len(BOTTOM1_PORTPAIRGROUPS), 0)
        self.assertEqual(len(ppg_mappings), 0)

        t_ppg_id2, _ = self._prepare_port_pair_group_test(
            project_id, t_ctx, 'pod_1', 0, [t_pp_id], True, [b_pp_id])
        BOTTOM1_PORTPAIRGROUPS.pop()
        fake_plugin.delete_port_pair_group(q_ctx, t_ppg_id2)
        ppg_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_ppg_id2, constants.RT_PORT_PAIR_GROUP)
        self.assertEqual(len(TOP_PORTPAIRGROUPS), 0)
        self.assertEqual(len(ppg_mappings), 0)

    @patch.object(sfc_db.SfcDbPlugin, '_make_port_pair_dict',
                  new=fake_make_port_pair_dict)
    @patch.object(context, 'get_context_from_neutron_context',
                  new=fake_get_context_from_neutron_context)
    def test_delete_port_pair(self):
        project_id = TEST_TENANT_ID
        q_ctx = FakeNeutronContext()
        t_ctx = context.get_db_context()
        self._basic_pod_setup()
        fake_plugin = FakeSfcPlugin()

        ingress = uuidutils.generate_uuid()
        egress = uuidutils.generate_uuid()
        t_pp1_id, _ = self._prepare_port_pair_test(
            project_id, t_ctx, 'pod_1', 0, ingress, egress, True)
        fake_plugin.delete_port_pair(q_ctx, t_pp1_id)
        ppg_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_pp1_id, constants.RT_PORT_PAIR_GROUP)
        self.assertEqual(len(TOP_PORTPAIRS), 0)
        self.assertEqual(len(BOTTOM1_PORTPAIRS), 0)
        self.assertEqual(len(ppg_mappings), 0)

        t_pp2_id, _ = self._prepare_port_pair_test(
            project_id, t_ctx, 'pod_1', 0, ingress, egress, True)
        BOTTOM1_PORTPAIRS.pop()
        fake_plugin.delete_port_pair(q_ctx, t_pp2_id)
        ppg_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_pp2_id, constants.RT_PORT_PAIR_GROUP)
        self.assertEqual(len(TOP_PORTPAIRS), 0)
        self.assertEqual(len(ppg_mappings), 0)

    @patch.object(context, 'get_context_from_neutron_context',
                  new=fake_get_context_from_neutron_context)
    def test_delete_flow_classifier(self):
        project_id = TEST_TENANT_ID
        q_ctx = FakeNeutronContext()
        t_ctx = context.get_db_context()
        self._basic_pod_setup()
        fake_plugin = FakeFcPlugin()

        src_port_id = uuidutils.generate_uuid()

        t_fc_id1, _ = self._prepare_flow_classifier_test(
            project_id, t_ctx, 'pod_1', 0, src_port_id, True)
        fake_plugin.delete_flow_classifier(q_ctx, t_fc_id1)
        ppg_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_fc_id1, constants.RT_FLOW_CLASSIFIER)
        self.assertEqual(len(TOP_FLOWCLASSIFIERS), 0)
        self.assertEqual(len(BOTTOM1_FLOWCLASSIFIERS), 0)
        self.assertEqual(len(ppg_mappings), 0)

        t_fc_id2, _ = self._prepare_flow_classifier_test(
            project_id, t_ctx, 'pod_1', 0, src_port_id, True)
        BOTTOM1_FLOWCLASSIFIERS.pop()
        fake_plugin.delete_flow_classifier(q_ctx, t_fc_id2)
        ppg_mappings = db_api.get_bottom_mappings_by_top_id(
            t_ctx, t_fc_id2, constants.RT_FLOW_CLASSIFIER)
        self.assertEqual(len(TOP_FLOWCLASSIFIERS), 0)
        self.assertEqual(len(ppg_mappings), 0)

    def tearDown(self):
        core.ModelBase.metadata.drop_all(core.get_engine())
        test_utils.get_resource_store().clean()
        cfg.CONF.unregister_opts(q_config.core_opts)
        xmanager.IN_TEST = False
@@ -111,7 +111,8 @@ class HelperTest(unittest.TestCase):
             'id': 'port-id-%d' % i,
             'fixed_ips': [{'ip_address': '10.0.1.%d' % i}],
             'mac_address': 'fa:16:3e:d4:01:%02x' % i,
-            'binding:host_id': 'host1'
+            'binding:host_id': 'host1',
+            'device_id': None
         } for i in range(1, 20)]
         agents = [{'type': 'Open vSwitch agent',
                    'tunnel_ip': '192.168.1.101'} for _ in range(1, 20)]
@@ -90,9 +90,18 @@ def list_resource(_type, is_top, filters=None):
     return ret


+class FakeTypeManager(object):
+
+    def __init__(self):
+        self.drivers = {}
+
+
 class FakeCorePlugin(object):
     supported_extension_aliases = ['agent']

+    def __init__(self):
+        self.type_manager = FakeTypeManager()
+
     def create_network(self, context, network):
         create_resource('network', False, network['network'])
         return network['network']
@@ -15,6 +15,7 @@

import copy

from oslo_utils import uuidutils
import six
from sqlalchemy.orm import attributes
from sqlalchemy.orm import exc

@@ -50,7 +51,11 @@ class ResourceStore(object):
             ('dnsnameservers', None),
             ('trunks', 'trunk'),
             ('subports', None),
-            ('agents', 'agent')]
+            ('agents', 'agent'),
+            ('sfc_port_pairs', constants.RT_PORT_PAIR),
+            ('sfc_port_pair_groups', constants.RT_PORT_PAIR_GROUP),
+            ('sfc_port_chains', constants.RT_PORT_CHAIN),
+            ('sfc_flow_classifiers', constants.RT_FLOW_CLASSIFIER)]

     def __init__(self):
         self.store_list = []
@@ -549,6 +554,8 @@ class FakeClient(object):
     def create_resources(self, _type, ctx, body):
         res_list = self._res_map[self.region_name][_type]
         res = dict(body[_type])
+        if 'id' not in res:
+            res['id'] = uuidutils.generate_uuid()
         res_list.append(res)
         return res
@@ -809,6 +809,7 @@ class XManagerTest(unittest.TestCase):
                        'device_owner': 'compute:None',
                        'binding:vif_type': 'ovs',
                        'binding:host_id': 'host1',
+                       'device_id': None,
                        'mac_address': 'fa:16:3e:d4:01:03',
                        'fixed_ips': [{'subnet_id': subnet1_id,
                                       'ip_address': '10.0.1.3'}]})

@@ -817,6 +818,7 @@ class XManagerTest(unittest.TestCase):
                        'device_owner': 'compute:None',
                        'binding:vif_type': 'ovs',
                        'binding:host_id': 'host2',
+                       'device_id': None,
                        'mac_address': 'fa:16:3e:d4:01:03',
                        'fixed_ips': [{'subnet_id': subnet1_id,
                                       'ip_address': '10.0.1.4'}]})
@@ -19,6 +19,7 @@ import eventlet
import netaddr
import random
import six
import time

from oslo_config import cfg
from oslo_log import log as logging
@@ -151,6 +152,9 @@ class XManager(PeriodicTasks):
             constants.JT_CONFIGURE_ROUTE: self.configure_route,
             constants.JT_ROUTER_SETUP: self.setup_bottom_router,
             constants.JT_PORT_DELETE: self.delete_server_port,
+            constants.JT_SFC_SYNC:
+                self.sync_service_function_chain,
+            constants.JT_RESOURCE_RECYCLE: self.recycle_resources,
             constants.JT_SEG_RULE_SETUP: self.configure_security_group_rules,
             constants.JT_NETWORK_UPDATE: self.update_network,
             constants.JT_SUBNET_UPDATE: self.update_subnet,
@@ -1045,7 +1049,8 @@ class XManager(PeriodicTasks):
                 {'key': 'fields', 'comparator': 'eq',
                  'value': ['id', 'binding:vif_type',
                            'binding:host_id', 'fixed_ips',
-                           'device_owner', 'mac_address']}])
+                           'device_owner', 'device_id',
+                           'mac_address']}])
             LOG.debug('Shadow ports %s in pod %s %s',
                       b_sw_ports, target_pod_id, run_label)
             LOG.debug('Ports %s in pod %s %s',
@@ -1253,3 +1258,272 @@ class XManager(PeriodicTasks):
            except q_cli_exceptions.NotFound:
                LOG.error('trunk: %(trunk_id)s not found, pod name: %(name)s',
                          {'trunk_id': b_trunk_id, 'name': b_region_name})

    def _delete_port_pair_by_ingress(self, ctx, b_client, ingress, project_id):
        filters = [{'key': 'ingress',
                    'comparator': 'eq',
                    'value': ingress},
                   {'key': 'project_id',
                    'comparator': 'eq',
                    'value': project_id}]
        pps = b_client.list_port_pairs(ctx, filters=filters)
        if not pps:
            return
        self._delete_bottom_resource_by_id(
            ctx, constants.RT_PORT_PAIR, pps[0]['id'],
            b_client, project_id)

    def _delete_flow_classifier_by_src_port(self, ctx, b_client,
                                            port_id, project_id):
        filters = [{'key': 'logical_source_port',
                    'comparator': 'eq',
                    'value': port_id},
                   {'key': 'project_id',
                    'comparator': 'eq',
                    'value': project_id}]
        fcs = b_client.list_flow_classifiers(ctx, filters=filters)
        if not fcs:
            return
        self._delete_bottom_resource_by_id(
            ctx, constants.RT_FLOW_CLASSIFIER, fcs[0]['id'],
            b_client, project_id)

    def _delete_portchain_by_fc_id(self, ctx, b_client, fc_id, project_id):
        filters = [{'key': 'project_id',
                    'comparator': 'eq',
                    'value': project_id}]
        pcs = b_client.list_port_chains(ctx, filters=filters)
        for pc in pcs:
            if fc_id in pc['flow_classifiers']:
                self._delete_bottom_resource_by_id(
                    ctx, constants.RT_PORT_CHAIN, pc['id'],
                    b_client, project_id)
                return

    def _clear_bottom_portpairgroup_portpairs(self, ctx, b_client,
                                              pp_ids, project_id):
        filters = [{'key': 'project_id',
                    'comparator': 'eq',
                    'value': project_id}]
        ppgs = b_client.list_port_pair_groups(ctx, filters=filters)
        for pp_id in pp_ids:
            for ppg in ppgs:
                if pp_id in ppg['port_pairs']:
                    ppg_body = {'port_pair_group': {
                        'port_pairs': []
                    }}
                    b_client.update_port_pair_groups(ctx, ppg['id'], ppg_body)
                    break
    def _delete_bottom_resource_by_id(self, ctx,
                                      res_type, res_id, b_client, project_id):
        try:
            b_client.delete_resources(res_type, ctx, res_id)
        except q_cli_exceptions.NotFound:
            LOG.debug(('%(res_type)s: %(id)s not found, '
                       'region name: %(name)s'),
                      {'res_type': res_type,
                       'id': res_id,
                       'name': b_client.region_name})
        except q_cli_exceptions.Conflict as e:
            if constants.STR_IN_USE in e.message:
                LOG.debug(('%(res_type)s: %(id)s in use, '
                           'region name: %(name)s'),
                          {'res_type': res_type,
                           'id': res_id,
                           'name': b_client.region_name})
                if res_type == constants.RT_FLOW_CLASSIFIER:
                    self._delete_portchain_by_fc_id(
                        ctx, b_client, res_id, project_id)
                    self._delete_bottom_resource_by_id(
                        ctx, constants.RT_FLOW_CLASSIFIER,
                        res_id, b_client, project_id)
                # we are deleting the port pair, meaning that the port pair
                # should no longer be used, so we remove it from
                # its port pair group, if any
                elif res_type == constants.RT_PORT_PAIR:
                    self._clear_bottom_portpairgroup_portpairs(
                        ctx, b_client, [res_id], project_id)
                    self._delete_bottom_resource_by_id(
                        ctx, constants.RT_PORT_PAIR,
                        res_id, b_client, project_id)
                # a conflict exception is not expected when deleting a port
                # pair group, because port pair groups are only deleted
                # during resource recycling and we guarantee that their port
                # chain is deleted beforehand; deleting a port chain does
                # not raise a conflict exception either
                else:
                    raise
            else:
                raise
        db_api.delete_mappings_by_bottom_id(ctx, res_id)
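The conflict handling above follows a general cascading-delete pattern: when a bottom resource cannot be deleted because it is still "in use", first remove (or detach) the resources that reference it, then retry the delete. The following is a minimal, self-contained sketch of that pattern; the `FakeClient` class and resource names are illustrative stand-ins, not Tricircle or networking-sfc APIs.

```python
class InUse(Exception):
    """Raised when a resource is still referenced by another resource."""


class FakeClient(object):
    def __init__(self):
        # a flow classifier referenced by a port chain cannot be deleted
        self.port_chains = {'pc-1': {'flow_classifiers': ['fc-1']}}
        self.flow_classifiers = {'fc-1': {}}

    def delete(self, res_type, res_id):
        if res_type == 'flow_classifier':
            for pc in self.port_chains.values():
                if res_id in pc['flow_classifiers']:
                    raise InUse(res_id)
            del self.flow_classifiers[res_id]
        elif res_type == 'port_chain':
            del self.port_chains[res_id]


def delete_bottom_resource(client, res_type, res_id):
    try:
        client.delete(res_type, res_id)
    except InUse:
        if res_type == 'flow_classifier':
            # delete the port chains that still reference it, then retry
            for pc_id, pc in list(client.port_chains.items()):
                if res_id in pc['flow_classifiers']:
                    delete_bottom_resource(client, 'port_chain', pc_id)
            delete_bottom_resource(client, res_type, res_id)
        else:
            raise


client = FakeClient()
delete_bottom_resource(client, 'flow_classifier', 'fc-1')
```

After the call, both the referencing port chain and the flow classifier are gone, mirroring how `_delete_bottom_resource_by_id` unwinds dependencies before retrying.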
    @_job_handle(constants.JT_RESOURCE_RECYCLE)
    def recycle_resources(self, ctx, payload):
        project_id = payload[constants.JT_RESOURCE_RECYCLE]
        filters = [{'key': 'project_id',
                    'comparator': 'eq',
                    'value': project_id}]
        resources = db_api.list_recycle_resources(ctx, filters)
        if not resources:
            return
        max_retries = 4
        # recycle_resources is triggered at the end of the
        # sync_service_function_chain function, so we need to handle the
        # case where recycle_resources has started running but the
        # sync_service_function_chain function has not yet finished
        filters = [{'key': 'type',
                    'comparator': 'eq',
                    'value': constants.JT_SFC_SYNC}]
        for i in range(max_retries):
            sync_sfc_job = db_api.list_jobs(ctx, filters)
            if sync_sfc_job:
                if i == max_retries - 1:
                    return
                time.sleep(5)

        res_map = collections.defaultdict(list)
        for res in resources:
            res_map[res['resource_type']].append(res['resource_id'])

        resource_types = [constants.RT_PORT_CHAIN,
                          constants.RT_FLOW_CLASSIFIER,
                          constants.RT_PORT_PAIR_GROUP,
                          constants.RT_PORT_PAIR]

        for res_type in resource_types:
            for res_id in res_map[res_type]:
                b_resources = db_api.get_bottom_mappings_by_top_id(
                    ctx, res_id, res_type)
                for b_pod, b_res_id in b_resources:
                    b_client = self._get_client(b_pod['region_name'])
                    self._delete_bottom_resource_by_id(
                        ctx, res_type, b_res_id, b_client, ctx.project_id)
                db_api.delete_recycle_resource(ctx, res_id)
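The fixed `resource_types` ordering above matters: a port chain must be removed before the flow classifiers and port pair groups it references, and a port pair group before its port pairs, so each delete finds its dependants already gone. A hedged sketch of just that ordering step (store layout and names are illustrative only):

```python
import collections

# delete parents before the resources they reference
RECYCLE_ORDER = ['port_chain', 'flow_classifier',
                 'port_pair_group', 'port_pair']


def recycle(resources):
    """resources: list of (resource_type, resource_id) awaiting deletion.

    Returns the resource ids in safe deletion order.
    """
    res_map = collections.defaultdict(list)
    for res_type, res_id in resources:
        res_map[res_type].append(res_id)
    deleted = []
    for res_type in RECYCLE_ORDER:
        # by the time we reach a type, everything referencing it is gone
        deleted.extend(res_map[res_type])
    return deleted


order = recycle([('port_pair', 'pp-1'),
                 ('port_chain', 'pc-1'),
                 ('flow_classifier', 'fc-1'),
                 ('port_pair_group', 'ppg-1')])
```

Regardless of the order in which recycle records were created, the chain is always deleted first and its port pairs last.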
    def _prepare_sfc_bottom_element(self, ctx, project_id, b_pod, ele,
                                    res_type, body, b_client, **kwargs):
        max_retries = 2
        for i in range(max_retries):
            try:
                _, b_res_id = self.helper.prepare_bottom_element(
                    ctx, project_id, b_pod, ele, res_type, body)
                return b_res_id
            except q_cli_exceptions.BadRequest as e:
                if i == max_retries - 1:
                    raise
                if (constants.STR_USED_BY not in e.message and
                        constants.STR_CONFLICTS_WITH not in e.message):
                    raise
                if res_type == constants.RT_PORT_PAIR:
                    self._delete_port_pair_by_ingress(
                        ctx, b_client, kwargs['ingress'], project_id)
                elif res_type == constants.RT_FLOW_CLASSIFIER:
                    self._delete_flow_classifier_by_src_port(
                        ctx, b_client, kwargs['logical_source_port'],
                        project_id)
                else:
                    raise
            except q_cli_exceptions.Conflict as e:
                if i == max_retries - 1:
                    raise
                if constants.STR_IN_USE not in e.message:
                    raise
                if res_type == constants.RT_PORT_PAIR_GROUP:
                    self._clear_bottom_portpairgroup_portpairs(
                        ctx, b_client, kwargs['port_pairs'], project_id)
                elif res_type == constants.RT_PORT_CHAIN:
                    self._delete_portchain_by_fc_id(
                        ctx, b_client, kwargs['fc_id'], project_id)
                else:
                    raise
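`_prepare_sfc_bottom_element` follows a create-with-one-repair-retry pattern: if creation fails with a recognized conflict, clean up the stale bottom resource that blocks it and try once more; any other error, or a second failure, propagates. A minimal sketch of that control flow, with hypothetical `create`/`cleanup` callables standing in for the bottom-plugin calls:

```python
class Conflict(Exception):
    """Stand-in for the client's conflict/bad-request exceptions."""


def create_with_retry(create, cleanup, max_retries=2):
    for i in range(max_retries):
        try:
            return create()
        except Conflict:
            if i == max_retries - 1:
                raise  # second failure propagates
            cleanup()  # remove the stale resource blocking creation


state = {'stale': True, 'created': False}


def create():
    if state['stale']:
        raise Conflict('in use')
    state['created'] = True
    return 'res-id'


def cleanup():
    state['stale'] = False


result = create_with_retry(create, cleanup)
```

The first attempt hits the conflict, the cleanup removes the stale resource, and the retry succeeds.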
    @_job_handle(constants.JT_SFC_SYNC)
    def sync_service_function_chain(self, ctx, payload):
        (b_pod_id, t_port_chain_id, net_id) = payload[
            constants.JT_SFC_SYNC].split('#')

        if b_pod_id == constants.POD_NOT_SPECIFIED:
            mappings = db_api.get_bottom_mappings_by_top_id(
                ctx, net_id, constants.RT_NETWORK)
            b_pods = [mapping[0] for mapping in mappings]
            for b_pod in b_pods:
                self.xjob_handler.sync_service_function_chain(
                    ctx, ctx.project_id, t_port_chain_id,
                    net_id, b_pod['pod_id'])
            return

        # abbreviations -- pp: port pair, ppg: port pair group,
        # pc: port chain, fc: flow classifier
        t_client = self._get_client()
        t_pc = t_client.get_port_chains(ctx, t_port_chain_id)
        b_pod = db_api.get_pod(ctx, b_pod_id)
        region_name = b_pod['region_name']
        b_client = self._get_client(region_name)
        # delete action
        if not t_pc:
            self.xjob_handler.recycle_resources(ctx, ctx.project_id)
            return

        t_pps = {}
        t_ppgs = []
        for ppg_id in t_pc['port_pair_groups']:
            ppg = t_client.get_port_pair_groups(ctx, ppg_id)
            if not ppg:
                LOG.error('port pair group: %(ppg_id)s not found, '
                          'pod name: %(name)s', {'ppg_id': ppg_id,
                                                 'name': region_name})
                raise
            t_ppgs.append(ppg)

        for ppg in t_ppgs:
            filters = [{'key': 'portpairgroup_id',
                        'comparator': 'eq',
                        'value': ppg['id']}]
            pp = t_client.list_port_pairs(ctx, filters=filters)
            if pp:
                t_pps[ppg['id']] = pp
        b_pp_ids = {}
        for key, value in six.iteritems(t_pps):
            b_pp_ids[key] = []
            for pp in value:
                pp_id = pp.pop('id')
                b_pp_id = self._prepare_sfc_bottom_element(
                    ctx, pp['project_id'], b_pod, {'id': pp_id},
                    constants.RT_PORT_PAIR, {'port_pair': pp}, b_client,
                    ingress=pp['ingress'])
                b_pp_ids[key].append(b_pp_id)

        b_ppg_ids = []
        for ppg in t_ppgs:
            ppg['port_pairs'] = b_pp_ids.get(ppg['id'], [])
            ppg_id = ppg.pop('id')
            ppg.pop('group_id')
            b_ppg_id = self._prepare_sfc_bottom_element(
                ctx, ppg['project_id'], b_pod, {'id': ppg_id},
                constants.RT_PORT_PAIR_GROUP, {'port_pair_group': ppg},
                b_client, port_pairs=ppg['port_pairs'])
            b_ppg_ids.append(b_ppg_id)

        b_fc_ids = []
        for fc_id in t_pc['flow_classifiers']:
            fc = t_client.get_flow_classifiers(ctx, fc_id)
            if fc:
                fc_id = fc.pop('id')
                b_fc_id = self._prepare_sfc_bottom_element(
                    ctx, ppg['project_id'], b_pod, {'id': fc_id},
                    constants.RT_FLOW_CLASSIFIER, {'flow_classifier': fc},
                    b_client, logical_source_port=fc['logical_source_port'])
                b_fc_ids.append(b_fc_id)

        t_pc.pop('id')
        t_pc['port_pair_groups'] = b_ppg_ids
        t_pc['flow_classifiers'] = b_fc_ids
        self._prepare_sfc_bottom_element(
            ctx, t_pc['project_id'], b_pod, {'id': t_port_chain_id},
            constants.RT_PORT_CHAIN, {'port_chain': t_pc}, b_client,
            fc_id=b_fc_ids[0])

        self.xjob_handler.recycle_resources(ctx, t_pc['project_id'])
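The heart of the sync job above is an ID-remapping step: bottom port pair groups must reference bottom port-pair IDs rather than the top ones, so the job first mirrors the port pairs, records the top-to-bottom mapping, and then rewrites each group before mirroring it. An illustrative sketch of that step, with a hypothetical `create_bottom` standing in for `_prepare_sfc_bottom_element`:

```python
def mirror_chain(top_ppgs, top_pps, create_bottom):
    """top_ppgs: {ppg_id: [top_pp_id, ...]}; top_pps: top port-pair ids.

    Returns {bottom_ppg_id: [bottom_pp_id, ...]}.
    """
    # mirror port pairs first and remember top -> bottom ids
    t2b = {pp_id: create_bottom('port_pair', pp_id) for pp_id in top_pps}
    bottom_ppgs = {}
    for ppg_id, pp_ids in top_ppgs.items():
        members = [t2b[pp_id] for pp_id in pp_ids]  # rewrite to bottom ids
        bottom_ppgs[create_bottom('port_pair_group', ppg_id)] = members
    return bottom_ppgs


def create_bottom(res_type, top_id):
    # stand-in for the real bottom-plugin create call
    return 'b-' + top_id


result = mirror_chain({'ppg-1': ['pp-1', 'pp-2']}, ['pp-1', 'pp-2'],
                      create_bottom)
```

The bottom group ends up holding only bottom port-pair IDs, which is exactly why the job builds `b_pp_ids` before touching the port pair groups.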