RFE alarm monitor: Fix hardcoded metadata, add functional test

This patch will focus on:
1. Fixing hardcoded metadata
2. Adding functional test for alarm monitor
3. Refactoring tosca template for alarm monitor
4. Refactoring scaling in/out support in alarm monitor
5. Supporting multiple triggers

Partial-bug: #1630614
Change-Id: Ic5d0046d0dc0b4381713bda01c485cecae17abea
Author: doantungbk
Date: 2016-10-05 07:51:17 -07:00
Parent: a1dd7fd73d
Commit: 0eafa5b1bf
26 changed files with 1007 additions and 257 deletions


@ -44,20 +44,81 @@ described firstly like other TOSCA templates in Tacker.
evaluations: 1
method: avg
comparison_operator: gt
action:
resize_compute:
action_name: respawn
actions: [respawn]
Alarm framework already supported some default backend actions like
**respawn, log, and log_and_kill**.
**scaling, respawn, log, and log_and_kill**.
Tacker users can choose the desired action as described in the above example.
Until now, the backend actions could be pointed to a specific policy which
is also described in the TOSCA template, such as a scaling policy. The integration between
alarm monitoring and auto-scaling was also supported by Alarm monitor in Tacker:
alarm monitoring and scaling was also supported by Alarm monitor in Tacker:
.. code-block:: yaml
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Demo example
metadata:
template_name: sample-tosca-vnfd
topology_template:
node_templates:
VDU1:
type: tosca.nodes.nfv.VDU.Tacker
capabilities:
nfv_compute:
properties:
disk_size: 1 GB
mem_size: 512 MB
num_cpus: 2
properties:
image: cirros-0.3.4-x86_64-uec
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: SG1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
properties:
management: true
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU1
VDU2:
type: tosca.nodes.nfv.VDU.Tacker
capabilities:
nfv_compute:
properties:
disk_size: 1 GB
mem_size: 512 MB
num_cpus: 2
properties:
image: cirros-0.3.4-x86_64-uec
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: SG1}
CP2:
type: tosca.nodes.nfv.CP.Tacker
properties:
management: true
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU2
VL1:
type: tosca.nodes.nfv.VL
properties:
network_name: net_mgmt
vendor: Tacker
policies:
- SP1:
type: tosca.policies.tacker.Scaling
@ -67,12 +128,12 @@ alarming monitoring and auto-scaling was also supported by Alarm monitor in Tack
min_instances: 1
max_instances: 3
default_instances: 2
targets: [VDU1]
targets: [VDU1,VDU2]
- vdu1_cpu_usage_monitoring_policy:
- vdu_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
triggers:
resize_compute:
vdu_hcpu_usage_scaling_out:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
@ -84,9 +145,28 @@ alarming monitoring and auto-scaling was also supported by Alarm monitor in Tack
evaluations: 1
method: avg
comparison_operator: gt
action:
resize_compute:
action_name: SP1
metadata: SG1
actions: [SP1]
vdu_lcpu_usage_scaling_in:
targets: [VDU1, VDU2]
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
metrics: cpu_util
condition:
threshold: 10
constraint: utilization less_than 10%
period: 600
evaluations: 1
method: avg
comparison_operator: lt
metadata: SG1
actions: [SP1]
**NOTE:**
The metadata defined in the VDU properties must match the metadata in the monitoring policy.
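The matching rule above can be sketched as a small validation helper. This is an illustrative sketch only, not Tacker's actual implementation: the exception mirrors the ``MetadataNotMatched`` class added in this patch, while ``validate_alarm_metadata`` is a hypothetical helper name.

```python
# Illustrative sketch (not Tacker's actual code): check that the metadata
# value referenced by an alarm policy matches the `metering.vnf` metadata
# declared on at least one VDU. MetadataNotMatched mirrors the exception
# added in this patch; validate_alarm_metadata is a hypothetical helper.

class MetadataNotMatched(ValueError):
    """Metadata for alarm policy is not matched."""


def validate_alarm_metadata(vdu_properties, policy_metadata):
    # Each VDU carries e.g. properties: {metadata: {metering.vnf: SG1}}
    vdu_values = {
        props.get('metadata', {}).get('metering.vnf')
        for props in vdu_properties.values()
    }
    if policy_metadata not in vdu_values:
        raise MetadataNotMatched(
            "Metadata for alarm policy is not matched")
    return True


vdus = {
    'VDU1': {'metadata': {'metering.vnf': 'SG1'}},
    'VDU2': {'metadata': {'metering.vnf': 'SG1'}},
}
assert validate_alarm_metadata(vdus, 'SG1')
```

With the templates above, a policy metadata of ``SG1`` matches both VDUs, while a value absent from every VDU would raise the exception.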
How to setup environment
~~~~~~~~~~~~~~~~~~~~~~~~
@ -112,45 +192,47 @@ is successfully passed to Ceilometer, Tacker users could use CLI:
.. code-block:: console
$ ceilometer alarm-list
$ aodh alarm list
+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------+----------+---------+------------+-------------------------------------+------------------+
| Alarm ID | Name | State | Severity | Enabled | Continuous | Alarm condition | Time constraints |
+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------+----------+---------+------------+-------------------------------------+------------------+
| f6a89242-d849-4a1a-9eb5-de4c0730252f | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-d4900104-6257-4084-8506-9fa6895d1294-vdu1_cpu_usage_monitoring_policy-7rt36gqbmuqo | insufficient data | low | True | True | avg(cpu_util) > 15.0 during 1 x 65s | None |
+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------+-------------------+----------+---------+------------+-------------------------------------+------------------
+--------------------------------------+-----------+--------------------------------------------------------------------------------------------------------------------------------------+-------------------+----------+---------+
| alarm_id | type | name | state | severity | enabled |
+--------------------------------------+-----------+--------------------------------------------------------------------------------------------------------------------------------------+-------------------+----------+---------+
| 6f2336b9-e0a2-4e33-88be-bc036192b42b | threshold | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-a0f60b00-ad3d-4769-92ef-e8d9518da2c8-vdu_lcpu_scaling_in-smgctfnc3ql5 | insufficient data | low | True |
| e049f0d3-09a8-46c0-9b88-e61f1f524aab | threshold | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-a0f60b00-ad3d-4769-92ef-e8d9518da2c8-vdu_hcpu_usage_scaling_out-lubylov5g6xb | insufficient data | low | True |
+--------------------------------------+-----------+--------------------------------------------------------------------------------------------------------------------------------------+-------------------+----------+---------+
.. code-block:: console
$ ceilometer alarm-show 35a80852-e24f-46ed-bd34-e2f831d00172
$ aodh alarm show 6f2336b9-e0a2-4e33-88be-bc036192b42b
+---------------------------+--------------------------------------------------------------------------+
| Property | Value |
+---------------------------+--------------------------------------------------------------------------+
| alarm_actions | ["http://pinedcn:9890/v1.0/vnfs/d4900104-6257-4084-8506-9fa6895d1294/vdu |
| | 1_cpu_usage_monitoring_policy/SP1/i42kd018"] |
| alarm_id | f6a89242-d849-4a1a-9eb5-de4c0730252f |
| comparison_operator | gt |
| description | utilization greater_than 50% |
+---------------------------+-------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------------+-------------------------------------------------------------------------------------------------------------------------------+
| alarm_actions | [u'http://pinedcn:9890/v1.0/vnfs/a0f60b00-ad3d-4769-92ef-e8d9518da2c8/vdu_lcpu_scaling_in/SP1-in/yl7kh5qd'] |
| alarm_id | 6f2336b9-e0a2-4e33-88be-bc036192b42b |
| comparison_operator | lt |
| description | utilization less_than 10% |
| enabled | True |
| evaluation_periods | 1 |
| exclude_outliers | False |
| insufficient_data_actions | None |
| meter_name | cpu_util |
| name | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-d4900104-6257-40 |
| | 84-8506-9fa6895d1294-vdu1_cpu_usage_monitoring_policy-7rt36gqbmuqo |
| name | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-a0f60b00-ad3d-4769-92ef-e8d9518da2c8-vdu_lcpu_scaling_in-smgctfnc3ql5 |
| ok_actions | None |
| period | 65 |
| project_id | abdc74442be44b9486ca5e32a980bca1 |
| query | metadata.user_metadata.vnf_id == d4900104-6257-4084-8506-9fa6895d1294 |
| period | 600 |
| project_id | 3db801789c9e4b61b14ce448c9e7fb6d |
| query | metadata.user_metadata.vnf_id = a0f60b00-ad3d-4769-92ef-e8d9518da2c8 |
| repeat_actions | True |
| severity | low |
| state | insufficient data |
| state_timestamp | 2016-11-16T18:39:30.134954 |
| statistic | avg |
| threshold | 15.0 |
| threshold | 10.0 |
| time_constraints | [] |
| timestamp | 2016-11-16T18:39:30.134954 |
| type | threshold |
| user_id | 25a691398e534893b8627f3762712515 |
+---------------------------+--------------------------------------------------------------------------+
| user_id | a783e8a94768484fb9a43af03c6426cb |
+---------------------------+-------------------------------------------------------------------------------------------------------------------------------+
How to trigger alarms:
@ -167,7 +249,7 @@ Another way could be used to check if backend action is handled well in Tacker:
.. code-block:: console
curl -H "Content-Type: application/json" -X POST -d '{"alarm_id": "35a80852-e24f-46ed-bd34-e2f831d00172", "current": "alarm"}' http://ubuntu:9890/v1.0/vnfs/6f3e523d-9e12-4973-a2e8-ea04b9601253/vdu1_cpu_usage_monitoring_policy/respawn/g0jtsxu9
curl -H "Content-Type: application/json" -X POST -d '{"alarm_id": "35a80852-e24f-46ed-bd34-e2f831d00172", "current": "alarm"}' http://pinedcn:9890/v1.0/vnfs/a0f60b00-ad3d-4769-92ef-e8d9518da2c8/vdu_lcpu_scaling_in/SP1-in/yl7kh5qd
Then, users can check Horizon to verify whether the VNF is respawned. Note that the URL used
in the above command can be captured from the **aodh alarm show** command as shown before.
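The same trigger can also be posted from Python. A minimal standard-library sketch, reusing the host, VNF id, policy path, and credential from the example above (all of which are illustrative values from this document, not fixed endpoints):

```python
# Sketch of the alarm-trigger webhook call shown above, using only the
# standard library. The host, VNF id, policy name, and credential segment
# are taken verbatim from the example and are illustrative.
import json
import urllib.request

alarm_url = ('http://pinedcn:9890/v1.0/vnfs/'
             'a0f60b00-ad3d-4769-92ef-e8d9518da2c8/'
             'vdu_lcpu_scaling_in/SP1-in/yl7kh5qd')

# Body matches the curl example: alarm id plus the current alarm state.
body = json.dumps({
    'alarm_id': '35a80852-e24f-46ed-bd34-e2f831d00172',
    'current': 'alarm',
}).encode('utf-8')

request = urllib.request.Request(
    alarm_url, data=body,
    headers={'Content-Type': 'application/json'})

# urllib.request.urlopen(request)  # uncomment against a live Tacker endpoint
```

Passing ``data=`` makes ``urllib`` issue a POST, matching the ``-X POST`` flag in the curl command.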


@ -18,6 +18,7 @@ topology_template:
image: cirros-0.3.4-x86_64-uec
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: VDU1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -40,7 +41,7 @@ topology_template:
- vdu1_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
triggers:
resize_compute:
vdu_hcpu_usage_respawning:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
@ -52,6 +53,5 @@ topology_template:
evaluations: 1
method: avg
comparison_operator: gt
action:
resize_compute:
action_name: respawn
metadata: VDU1
actions: [respawn]


@ -18,6 +18,7 @@ topology_template:
image: cirros-0.3.4-x86_64-uec
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: SG1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -29,6 +30,30 @@ topology_template:
node: VL1
- virtualBinding:
node: VDU1
VDU2:
type: tosca.nodes.nfv.VDU.Tacker
capabilities:
nfv_compute:
properties:
disk_size: 1 GB
mem_size: 512 MB
num_cpus: 2
properties:
image: cirros-0.3.4-x86_64-uec
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: SG1}
CP2:
type: tosca.nodes.nfv.CP.Tacker
properties:
management: true
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU2
VL1:
type: tosca.nodes.nfv.VL
@ -45,12 +70,12 @@ topology_template:
min_instances: 1
max_instances: 3
default_instances: 2
targets: [VDU1]
targets: [VDU1,VDU2]
- vdu1_cpu_usage_monitoring_policy:
- vdu_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
triggers:
resize_compute:
vdu_hcpu_usage_scaling_out:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
@ -62,6 +87,21 @@ topology_template:
evaluations: 1
method: avg
comparison_operator: gt
action:
resize_compute:
action_name: SP1
metadata: SG1
actions: [SP1]
vdu_lcpu_usage_scaling_in:
targets: [VDU1, VDU2]
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
metrics: cpu_util
condition:
threshold: 10
constraint: utilization less_than 10%
period: 600
evaluations: 1
method: avg
comparison_operator: lt
metadata: SG1
actions: [SP1]


@ -249,6 +249,10 @@ class AlarmUrlInvalid(BadRequest):
message = _("Invalid alarm url for VNF %(vnf_id)s")
class TriggerNotFound(NotFound):
message = _("Trigger %(trigger_name)s does not exist for VNF %(vnf_id)s")
class VnfPolicyNotFound(NotFound):
message = _("Policy %(policy)s does not exist for VNF %(vnf_id)s")


@ -153,6 +153,10 @@ class VNFInactive(exceptions.InvalidInput):
message = _("VNF %(vnf_id)s is not in Active state %(message)s")
class MetadataNotMatched(exceptions.InvalidInput):
message = _("Metadata for alarm policy is not matched")
def _validate_service_type_list(data, valid_values=None):
if not isinstance(data, list):
msg = _("invalid data format for service list: '%s'") % data


@ -10,6 +10,8 @@
# License for the specific language governing permissions and limitations
# under the License.
POLICY_ALARMING = 'tosca.policies.tacker.Alarming'
DEFAULT_ALARM_ACTIONS = ['respawn', 'log', 'log_and_kill', 'notify']
VNF_CIRROS_CREATE_TIMEOUT = 300
VNF_CIRROS_DELETE_TIMEOUT = 300
VNF_CIRROS_DEAD_TIMEOUT = 250


@ -0,0 +1,57 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Demo example
metadata:
template_name: sample-tosca-vnfd
topology_template:
node_templates:
VDU1:
type: tosca.nodes.nfv.VDU.Tacker
capabilities:
nfv_compute:
properties:
disk_size: 1 GB
mem_size: 512 MB
num_cpus: 2
properties:
image: cirros-0.3.4-x86_64-uec
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: VDU1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
properties:
management: true
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU1
VL1:
type: tosca.nodes.nfv.VL
properties:
network_name: net_mgmt
vendor: Tacker
policies:
- vdu1_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
triggers:
vdu_hcpu_usage_respawning:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
metrics: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 600
evaluations: 1
method: avg
comparison_operator: gt
metadata: VDU1
actions: [respawn]


@ -0,0 +1,82 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Demo example
metadata:
template_name: sample-tosca-vnfd
topology_template:
node_templates:
VDU1:
type: tosca.nodes.nfv.VDU.Tacker
capabilities:
nfv_compute:
properties:
disk_size: 1 GB
mem_size: 512 MB
num_cpus: 2
properties:
image: cirros-0.3.4-x86_64-uec
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: SG1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
properties:
management: true
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU1
VL1:
type: tosca.nodes.nfv.VL
properties:
network_name: net_mgmt
vendor: Tacker
policies:
- SP1:
type: tosca.policies.tacker.Scaling
properties:
increment: 1
cooldown: 60
min_instances: 1
max_instances: 3
default_instances: 2
targets: [VDU1]
- vdu_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
triggers:
vdu_hcpu_usage_scaling_out:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
metrics: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 600
evaluations: 1
method: avg
comparison_operator: gt
metadata: SG1
actions: [SP1]
vdu_hcpu_usage_scaling_in:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
metrics: cpu_util
condition:
threshold: 10
constraint: utilization less_than 10%
period: 600
evaluations: 1
method: avg
comparison_operator: lt
metadata: SG1
actions: [SP1]


@ -0,0 +1,150 @@
#
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import time
from tacker.plugins.common import constants as evt_constants
from tacker.tests import constants
from tacker.tests.functional import base
from tacker.tests.utils import read_file
import yaml
class VnfTestAlarmMonitor(base.BaseTackerTest):
def _test_vnf_tosca_alarm(self, vnfd_file, vnf_name):
vnf_trigger_path = '/vnfs/%s/triggers'
data = dict()
data['tosca'] = read_file(vnfd_file)
tosca_dict = yaml.safe_load(data['tosca'])
toscal = data['tosca']
tosca_arg = {'vnfd': {'name': vnf_name,
'attributes': {'vnfd': toscal}}}
# Create vnfd with tosca template
vnfd_instance = self.client.create_vnfd(body=tosca_arg)
self.assertIsNotNone(vnfd_instance)
# Create vnf with vnfd_id
vnfd_id = vnfd_instance['vnfd']['id']
vnf_arg = {'vnf': {'vnfd_id': vnfd_id, 'name': vnf_name}}
vnf_instance = self.client.create_vnf(body=vnf_arg)
self.validate_vnf_instance(vnfd_instance, vnf_instance)
vnf_id = vnf_instance['vnf']['id']
def _waiting_time(count):
self.wait_until_vnf_active(
vnf_id,
constants.VNF_CIRROS_CREATE_TIMEOUT,
constants.ACTIVE_SLEEP_TIME)
vnf = self.client.show_vnf(vnf_id)['vnf']
# {"VDU1": ["10.0.0.14", "10.0.0.5"]}
self.assertEqual(count, len(json.loads(vnf['mgmt_url'])['VDU1']))
def trigger_vnf(vnf, policy_name, policy_action):
credential = 'g0jtsxu9'
body = {"trigger": {'policy_name': policy_name,
'action_name': policy_action,
'params': {
'data': {'alarm_id': '35a80852-e24f-46ed-bd34-e2f831d00172', 'current': 'alarm'}, # noqa
'credential': credential}
}
}
self.client.post(vnf_trigger_path % vnf, body)
def _inject_monitoring_policy(vnfd_dict):
if vnfd_dict.get('tosca_definitions_version'):
polices = vnfd_dict['topology_template'].get('policies', [])
mon_policy = dict()
for policy_dict in polices:
for name, policy in policy_dict.items():
if policy['type'] == constants.POLICY_ALARMING:
triggers = policy['triggers']
for trigger_name, trigger_dict in triggers.items():
policy_action_list = trigger_dict['actions']
for policy_action in policy_action_list:
mon_policy[trigger_name] = policy_action
return mon_policy
def verify_policy(policy_dict, kw_policy):
for name, action in policy_dict.items():
if kw_policy in name:
return name
# trigger alarm
monitoring_policy = _inject_monitoring_policy(tosca_dict)
for mon_policy_name, mon_policy_action in monitoring_policy.items():
if mon_policy_action in constants.DEFAULT_ALARM_ACTIONS:
self.wait_until_vnf_active(
vnf_id,
constants.VNF_CIRROS_CREATE_TIMEOUT,
constants.ACTIVE_SLEEP_TIME)
trigger_vnf(vnf_id, mon_policy_name, mon_policy_action)
else:
if 'scaling_out' in mon_policy_name:
_waiting_time(2)
time.sleep(constants.SCALE_WINDOW_SLEEP_TIME)
# scaling-out backend action
scaling_out_action = mon_policy_action + '-out'
trigger_vnf(vnf_id, mon_policy_name, scaling_out_action)
_waiting_time(3)
scaling_in_name = verify_policy(monitoring_policy,
kw_policy='scaling_in')
if scaling_in_name:
time.sleep(constants.SCALE_WINDOW_SLEEP_TIME)
# scaling-in backend action
scaling_in_action = mon_policy_action + '-in'
trigger_vnf(vnf_id, scaling_in_name, scaling_in_action)
_waiting_time(2)
self.verify_vnf_crud_events(
vnf_id, evt_constants.RES_EVT_SCALE,
evt_constants.ACTIVE, cnt=2)
self.verify_vnf_crud_events(
vnf_id, evt_constants.RES_EVT_SCALE,
evt_constants.PENDING_SCALE_OUT, cnt=1)
self.verify_vnf_crud_events(
vnf_id, evt_constants.RES_EVT_SCALE,
evt_constants.PENDING_SCALE_IN, cnt=1)
# Delete vnf_instance with vnf_id
try:
self.client.delete_vnf(vnf_id)
except Exception:
assert False, ("Failed to delete vnf %s after the monitor test" %
vnf_id)
# Verify VNF monitor events captured for states, ACTIVE and DEAD
vnf_state_list = [evt_constants.ACTIVE, evt_constants.DEAD]
self.verify_vnf_monitor_events(vnf_id, vnf_state_list)
# Delete vnfd_instance
self.addCleanup(self.client.delete_vnfd, vnfd_id)
self.addCleanup(self.wait_until_vnf_delete, vnf_id,
constants.VNF_CIRROS_DELETE_TIMEOUT)
def test_vnf_alarm_respawn(self):
self._test_vnf_tosca_alarm(
'sample-tosca-alarm-respawn.yaml',
'alarm and respawn vnf')
def test_vnf_alarm_scale(self):
self._test_vnf_tosca_alarm(
'sample-tosca-alarm-scale.yaml',
'alarm and scale vnf')


@ -250,8 +250,7 @@ class TestDeviceHeat(base.TestCase):
tosca_tpl_name,
hot_tpl_name,
param_values='',
is_monitor=True,
is_alarm=False):
is_monitor=True):
tosca_tpl = _get_template(tosca_tpl_name)
exp_tmpl = self._get_expected_vnfd(tosca_tpl)
tosca_hw_dict = yaml.safe_load(_get_template(hot_tpl_name))
@ -279,11 +278,9 @@ class TestDeviceHeat(base.TestCase):
'"respawn"}, "parameters": {"count": 3, '
'"interval": 10}, "monitoring_params": '
'{"count": 3, "interval": 10}}}}}'})
if is_alarm:
dvc['attributes'].update({'alarm_url': ''})
return dvc
def _get_dummy_tosca_vnf(self, template, input_params='', is_alarm=False):
def _get_dummy_tosca_vnf(self, template, input_params=''):
tosca_template = _get_template(template)
vnf = utils.get_dummy_device_obj()
@ -292,24 +289,20 @@ class TestDeviceHeat(base.TestCase):
vnf['vnfd'] = dtemplate
vnf['attributes'] = {}
vnf['attributes']['param_values'] = input_params
if is_alarm:
vnf['attributes']['alarm_url'] = ''
return vnf
def _test_assert_equal_for_tosca_templates(self, tosca_tpl_name,
hot_tpl_name,
input_params='',
files=None,
is_monitor=True,
is_alarm=False):
vnf = self._get_dummy_tosca_vnf(tosca_tpl_name, input_params, is_alarm)
is_monitor=True):
vnf = self._get_dummy_tosca_vnf(tosca_tpl_name, input_params)
expected_result = '4a4c2d44-8a52-4895-9a75-9d1c76c3e738'
expected_fields = self._get_expected_fields_tosca(hot_tpl_name)
expected_vnf = self._get_expected_tosca_vnf(tosca_tpl_name,
hot_tpl_name,
input_params,
is_monitor,
is_alarm)
is_monitor)
result = self.heat_driver.create(plugin=None, context=self.context,
vnf=vnf,
auth_attr=utils.get_vim_auth_obj())
@ -459,8 +452,7 @@ class TestDeviceHeat(base.TestCase):
def test_create_tosca_with_alarm_monitoring(self):
self._test_assert_equal_for_tosca_templates(
'tosca_alarm.yaml',
'hot_tosca_alarm.yaml',
is_monitor=False,
is_alarm=True
'tosca_alarm_respawn.yaml',
'hot_tosca_alarm_respawn.yaml',
is_monitor=False
)


@ -0,0 +1,24 @@
heat_template_version: 2013-05-23
description: 'sample-tosca-vnfd-scaling
'
outputs:
mgmt_ip-VDU1:
value:
get_attr: [CP1, fixed_ips, 0, ip_address]
parameters: {}
resources:
CP1:
properties: {network: net_mgmt, port_security_enabled: false}
type: OS::Neutron::Port
VDU1:
properties:
availability_zone: nova
config_drive: false
flavor: m1.tiny
image: cirros-0.3.4-x86_64-uec
metadata: {metering.vnf: SG1}
networks:
- port: {get_resource: CP1}
user_data_format: SOFTWARE_CONFIG
type: OS::Nova::Server


@ -0,0 +1,41 @@
heat_template_version: 2013-05-23
description: 'An exception will be raised when having the mismatched metadata
(metadata is described in monitoring policy but unavailable in VDU properties).
'
outputs:
mgmt_ip-VDU1:
value:
get_attr: [CP1, fixed_ips, 0, ip_address]
parameters: {}
resources:
VDU1:
properties:
availability_zone: nova
config_drive: false
flavor: {get_resource: VDU1_flavor}
image: cirros-0.3.4-x86_64-uec
networks:
- port: {get_resource: CP1}
user_data_format: SOFTWARE_CONFIG
type: OS::Nova::Server
CP1:
properties: {network: net_mgmt, port_security_enabled: false}
type: OS::Neutron::Port
VDU1_flavor:
type: OS::Nova::Flavor
properties:
disk: 1
ram: 512
vcpus: 2
vdu_hcpu_usage_respawning:
type: OS::Aodh::Alarm
properties:
description: utilization greater_than 50%
meter_name: cpu_util
threshold: 50
period: 60
statistic: average
evaluation_periods: 1
comparison_operator: gt
'matching_metadata': {'metadata.user_metadata.vnf': 'VDU1'}


@ -18,6 +18,7 @@ resources:
networks:
- port: {get_resource: CP1}
user_data_format: SOFTWARE_CONFIG
metadata: {'metering.vnf': 'VDU1'}
type: OS::Nova::Server
CP1:
properties: {network: net_mgmt, port_security_enabled: false}
@ -28,7 +29,7 @@ resources:
disk: 1
ram: 512
vcpus: 2
vdu1_cpu_usage_monitoring_policy:
vdu_hcpu_usage_respawning:
type: OS::Aodh::Alarm
properties:
description: utilization greater_than 50%
@ -38,4 +39,4 @@ resources:
statistic: average
evaluation_periods: 1
comparison_operator: gt
'matching_metadata': {'metadata.user_metadata.vnf': 'VDU1'}


@ -0,0 +1,49 @@
heat_template_version: 2013-05-23
description: Tacker scaling template
parameters: {}
resources:
G1:
properties:
cooldown: 60
desired_capacity: 2
max_size: 3
min_size: 1
resource: {type: scaling.yaml}
type: OS::Heat::AutoScalingGroup
SP1_scale_in:
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: {get_resource: G1}
cooldown: 60
scaling_adjustment: '-1'
type: OS::Heat::ScalingPolicy
SP1_scale_out:
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: {get_resource: G1}
cooldown: 60
scaling_adjustment: 1
type: OS::Heat::ScalingPolicy
vdu_hcpu_usage_scaling_out:
type: OS::Aodh::Alarm
properties:
description: utilization greater_than 50%
meter_name: cpu_util
statistic: avg
period: 600
evaluation_periods: 1
threshold: 50
matching_metadata: {'metadata.user_metadata.vnf': SG1}
comparison_operator: gt
vdu_lcpu_usage_scaling_in:
type: OS::Aodh::Alarm
properties:
description: utilization less_than 10%
meter_name: cpu_util
statistic: avg
period: 600
evaluation_periods: 1
threshold: 10
matching_metadata: {'metadata.user_metadata.vnf': SG1}
comparison_operator: lt


@ -18,6 +18,7 @@ topology_template:
image: cirros-0.3.4-x86_64-uec
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: VDU1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -40,7 +41,7 @@ topology_template:
- vdu1_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
triggers:
resize_compute:
vdu_hcpu_usage_respawning:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
@ -52,6 +53,5 @@ topology_template:
evaluations: 1
method: avg
comparison_operator: gt
action:
resize_compute:
action_name: respawn
metadata: VDU1
actions: [respawn]


@ -18,6 +18,7 @@ topology_template:
image: cirros-0.3.4-x86_64-uec
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: SG1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -50,7 +51,7 @@ topology_template:
- vdu1_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
triggers:
resize_compute:
vdu_hcpu_usage_scaling_out:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
@ -62,6 +63,5 @@ topology_template:
evaluations: 1
method: avg
comparison_operator: gt
action:
resize_compute:
action_name: SP1
metadata: SG1
actions: [SP1]


@ -0,0 +1,60 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: >
An exception will be raised when having the mismatched metadata
(metadata is described in monitoring policy but unavailable in
VDU properties).
metadata:
template_name: sample-tosca-vnfd
topology_template:
node_templates:
VDU1:
type: tosca.nodes.nfv.VDU.Tacker
capabilities:
nfv_compute:
properties:
disk_size: 1 GB
mem_size: 512 MB
num_cpus: 2
properties:
image: cirros-0.3.4-x86_64-uec
mgmt_driver: noop
availability_zone: nova
CP1:
type: tosca.nodes.nfv.CP.Tacker
properties:
management: true
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU1
VL1:
type: tosca.nodes.nfv.VL
properties:
network_name: net_mgmt
vendor: Tacker
policies:
- vdu1_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
triggers:
vdu_hcpu_usage_respawning:
event_type:
type: tosca.events.resource.utilization
implementation: Ceilometer
metrics: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 60
evaluations: 1
method: average
comparison_operator: gt
metadata: VDU1
actions: ''


@ -18,6 +18,8 @@ topology_template:
image: cirros-0.3.4-x86_64-uec
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: VDU1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -40,7 +42,7 @@ topology_template:
- vdu1_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
triggers:
resize_compute:
vdu_hcpu_usage_respawning:
event_type:
type: tosca.events.resource.utilization
implementation: Ceilometer
@ -52,5 +54,5 @@ topology_template:
evaluations: 1
method: average
comparison_operator: gt
action:
resize_compute: ''
metadata: VDU1
actions: ''


@ -0,0 +1,78 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: sample-tosca-vnfd-scaling
metadata:
template_name: sample-tosca-vnfd-scaling
topology_template:
node_templates:
VDU1:
type: tosca.nodes.nfv.VDU.Tacker
properties:
image: cirros-0.3.4-x86_64-uec
mgmt_driver: noop
availability_zone: nova
flavor: m1.tiny
metadata: {metering.vnf: SG1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
properties:
management: true
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU1
VL1:
type: tosca.nodes.nfv.VL
properties:
network_name: net_mgmt
vendor: Tacker
policies:
- SP1:
type: tosca.policies.tacker.Scaling
properties:
increment: 1
cooldown: 60
min_instances: 1
max_instances: 3
default_instances: 2
targets: [VDU1]
- vdu_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
triggers:
vdu_hcpu_usage_scaling_out:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
metrics: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 600
evaluations: 1
method: avg
comparison_operator: gt
metadata: SG1
actions: [SP1]
vdu_lcpu_usage_scaling_in:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
metrics: cpu_util
condition:
threshold: 10
constraint: utilization less_than 10%
period: 600
evaluations: 1
method: avg
comparison_operator: lt
metadata: SG1
actions: [SP1]


@ -457,3 +457,26 @@ class TestOpenStack(base.TestCase):
'test_tosca_mac_ip.yaml',
'hot_tosca_mac_ip.yaml'
)
def test_create_tosca_alarm_respawn(self):
self._test_assert_equal_for_tosca_templates(
'tosca_alarm_respawn.yaml',
'hot_tosca_alarm_respawn.yaml',
is_monitor=False
)
def test_create_tosca_alarm_scale(self):
self._test_assert_equal_for_tosca_templates(
'tosca_alarm_scale.yaml',
'hot_tosca_alarm_scale.yaml',
files={'scaling.yaml': 'hot_alarm_scale_custom.yaml'},
is_monitor=False
)
def test_create_tosca_with_alarm_monitoring_not_matched(self):
self.assertRaises(vnfm.MetadataNotMatched,
self._test_assert_equal_for_tosca_templates,
'tosca_alarm_metadata.yaml',
'hot_tosca_alarm_metadata.yaml',
is_monitor=False
)


@ -394,17 +394,17 @@ class TestVNFMPlugin(db_base.SqlTestCase):
dummy_vnf['vim_id'] = '437ac8ef-a8fb-4b6e-8d8a-a5e86a376e8b'
return dummy_vnf
def _test_create_vnf_trigger(self, action_value):
def _test_create_vnf_trigger(self, policy_name, action_value):
vnf_id = "6261579e-d6f3-49ad-8bc3-a9cb974778fe"
trigger_request = {"trigger": {"action_name": action_value, "params": {
"credential": "026kll6n", "data": {"current": "alarm",
'alarm_id':
"b7fa9ffd-0a4f-4165-954b-5a8d0672a35f"}},
"policy_name": "vdu1_cpu_usage_monitoring_policy"}}
"policy_name": policy_name}}
expected_result = {"action_name": action_value, "params": {
"credential": "026kll6n", "data": {"current": "alarm",
"alarm_id": "b7fa9ffd-0a4f-4165-954b-5a8d0672a35f"}},
"policy_name": "vdu1_cpu_usage_monitoring_policy"}
"policy_name": policy_name}
self._vnf_alarm_monitor.process_alarm_for_vnf.return_value = True
trigger_result = self.vnfm_plugin.create_vnf_trigger(
self.context, vnf_id, trigger_request)
@ -418,7 +418,8 @@ class TestVNFMPlugin(db_base.SqlTestCase):
mock_get_vnf.return_value = dummy_vnf
mock_action_class = mock.Mock()
mock_get_policy.return_value = mock_action_class
self._test_create_vnf_trigger(action_value="respawn")
self._test_create_vnf_trigger(policy_name="vdu_hcpu_usage_respawning",
action_value="respawn")
mock_get_policy.assert_called_once_with('respawn', 'test_vim')
mock_action_class.execute_action.assert_called_once_with(
self.vnfm_plugin, dummy_vnf)
@ -432,7 +433,8 @@ class TestVNFMPlugin(db_base.SqlTestCase):
mock_action_class = mock.Mock()
mock_get_policy.return_value = mock_action_class
scale_body = {'scale': {'policy': 'SP1', 'type': 'out'}}
self._test_create_vnf_trigger(action_value="SP1")
self._test_create_vnf_trigger(policy_name="vdu_hcpu_usage_scaling_out",
action_value="SP1-out")
mock_get_policy.assert_called_once_with('scaling', 'test_vim')
mock_action_class.execute_action.assert_called_once_with(
self.vnfm_plugin, dummy_vnf, scale_body)


@@ -67,7 +67,7 @@ class TestSamples(testtools.TestCase):
self.assertIsNotNone(
tosca,
"Tosca parser failed to parse %s" % f)
utils.post_process_template(tosca)
hot = None
try:
hot = tosca_translator.TOSCATranslator(tosca,


@@ -84,7 +84,7 @@ class TOSCAToHOT(object):
self.fields['template'] = self.heat_template_yaml
if is_tosca_format:
self._handle_scaling(vnfd_dict)
self._handle_policies(vnfd_dict)
if self.monitoring_dict:
self.vnf['attributes']['monitoring_policy'] = jsonutils.dumps(
self.monitoring_dict)
@@ -124,30 +124,23 @@ class TOSCAToHOT(object):
return dev_attrs
@log.log
def _handle_scaling(self, vnfd_dict):
def _handle_policies(self, vnfd_dict):
vnf = self.vnf
(is_scaling_needed, scaling_group_names,
main_dict) = self._generate_hot_scaling(
vnfd_dict['topology_template'], 'scaling.yaml')
(is_enabled_alarm, alarm_resource, sub_heat_tpl_yaml) =\
(is_enabled_alarm, alarm_resource, heat_tpl_yaml) =\
self._generate_hot_alarm_resource(vnfd_dict['topology_template'])
if is_enabled_alarm and not is_scaling_needed:
self.fields['template'] = self.heat_template_yaml
self.fields['template'] = heat_tpl_yaml
if is_scaling_needed:
if is_enabled_alarm:
main_dict['resources'].update(alarm_resource)
main_yaml = yaml.dump(main_dict)
self.fields['template'] = main_yaml
if is_enabled_alarm:
self.fields['files'] = {
'scaling.yaml': sub_heat_tpl_yaml
}
else:
self.fields['files'] = {
'scaling.yaml': self.heat_template_yaml
}
self.fields['files'] = {'scaling.yaml': self.heat_template_yaml}
vnf['attributes']['heat_template'] = main_yaml
# TODO(kanagaraj-manickam) when multiple groups are
# supported, make this scaling attribute as
@@ -309,6 +302,7 @@ class TOSCAToHOT(object):
LOG.debug("tosca-parser error: %s", str(e))
raise vnfm.ToscaParserFailed(error_msg_details=str(e))
metadata = toscautils.get_vdu_metadata(tosca)
monitoring_dict = toscautils.get_vdu_monitoring(tosca)
mgmt_ports = toscautils.get_mgmt_ports(tosca)
res_tpl = toscautils.get_resources_dict(tosca,
@@ -322,10 +316,12 @@ class TOSCAToHOT(object):
LOG.debug("heat-translator error: %s", str(e))
raise vnfm.HeatTranslatorFailed(error_msg_details=str(e))
heat_template_yaml = toscautils.post_process_heat_template(
heat_template_yaml, mgmt_ports, res_tpl, self.unsupported_props)
heat_template_yaml, mgmt_ports, metadata,
res_tpl, self.unsupported_props)
self.heat_template_yaml = heat_template_yaml
self.monitoring_dict = monitoring_dict
self.metadata = metadata
@log.log
def _generate_hot_from_legacy(self, vnfd_dict, dev_attrs):
@@ -514,56 +510,54 @@ class TOSCAToHOT(object):
alarm_resource = dict()
heat_tpl = self.heat_template_yaml
heat_dict = yamlparser.simple_ordered_parse(heat_tpl)
sub_heat_dict = copy.deepcopy(heat_dict)
is_enabled_alarm = False
if 'policies' in topology_tpl_dict:
for policy_dict in topology_tpl_dict['policies']:
name, policy_tpl_dict = list(policy_dict.items())[0]
if policy_tpl_dict['type'] == ALARMING_POLICY:
# need to parse triggers here: scaling in/out, respawn,...
if policy_tpl_dict['type'] == \
'tosca.policies.tacker.Alarming':
is_enabled_alarm = True
alarm_resource[name], sub_heat_dict =\
self._convert_to_heat_monitoring_resource(policy_dict,
self.vnf, sub_heat_dict, topology_tpl_dict)
triggers = policy_tpl_dict['triggers']
for trigger_name, trigger_dict in triggers.items():
alarm_resource[trigger_name] =\
self._convert_to_heat_monitoring_resource({
trigger_name: trigger_dict}, self.vnf)
heat_dict['resources'].update(alarm_resource)
break
self.heat_template_yaml = yaml.dump(heat_dict)
sub_heat_tpl_yaml = yaml.dump(sub_heat_dict)
return (is_enabled_alarm, alarm_resource, sub_heat_tpl_yaml)
heat_tpl_yaml = yaml.dump(heat_dict)
return (is_enabled_alarm,
alarm_resource,
heat_tpl_yaml
)
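The refactored `_generate_hot_alarm_resource` above expands each trigger under a `tosca.policies.tacker.Alarming` policy into its own HOT resource instead of one alarm per policy. A minimal standalone sketch of that expansion, with the property conversion stubbed to `meter_name` only (trigger names mirror the tests earlier in this patch; the stub is not the patch's full conversion logic):

```python
# Sketch: expand each alarm trigger into its own OS::Aodh::Alarm resource.
# The per-trigger property conversion is stubbed; the real code derives the
# full property set from the trigger's condition block.
ALARMING_POLICY = 'tosca.policies.tacker.Alarming'

def generate_alarm_resources(topology_tpl_dict):
    alarm_resource = {}
    is_enabled_alarm = False
    for policy_dict in topology_tpl_dict.get('policies', []):
        name, policy_tpl_dict = list(policy_dict.items())[0]
        if policy_tpl_dict['type'] == ALARMING_POLICY:
            is_enabled_alarm = True
            # one heat resource per trigger, keyed by trigger name
            for trigger_name, trigger_dict in \
                    policy_tpl_dict['triggers'].items():
                alarm_resource[trigger_name] = {
                    'type': 'OS::Aodh::Alarm',
                    'properties': {'meter_name': trigger_dict['metrics']},
                }
            break
    return is_enabled_alarm, alarm_resource

topology = {'policies': [
    {'vdu1_cpu_usage_monitoring_policy': {
        'type': ALARMING_POLICY,
        'triggers': {
            'vdu_hcpu_usage_scaling_out': {'metrics': 'cpu_util'},
            'vdu_hcpu_usage_respawning': {'metrics': 'cpu_util'},
        }}}]}

enabled, resources = generate_alarm_resources(topology)
```

With this shape, multi-trigger support falls out naturally: every trigger becomes an independently addressable alarm resource merged into the main heat dict.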
@log.log
def _convert_to_heat_monitoring_resource(self, mon_policy, vnf,
sub_heat_dict, topology_tpl_dict):
def _convert_to_heat_monitoring_resource(self, mon_policy, vnf):
mon_policy_hot = {'type': 'OS::Aodh::Alarm'}
mon_policy_hot['properties'] = self._convert_to_heat_monitoring_prop(
mon_policy, vnf)
mon_policy_hot['properties'] = \
self._convert_to_heat_monitoring_prop(mon_policy, vnf)
return mon_policy_hot
if 'policies' in topology_tpl_dict:
for policies in topology_tpl_dict['policies']:
policy_name, policy_dt = list(policies.items())[0]
if policy_dt['type'] == SCALING_POLICY:
# Fixed metadata. it will be fixed
# once targets are supported
metadata_dict = dict()
metadata_dict['metering.vnf_id'] = vnf['id']
sub_heat_dict['resources']['VDU1']['properties']['metadata'] =\
metadata_dict
matching_metadata_dict = dict()
matching_metadata_dict['metadata.user_metadata.vnf_id'] =\
vnf['id']
mon_policy_hot['properties']['matching_metadata'] =\
matching_metadata_dict
break
return mon_policy_hot, sub_heat_dict
@log.log
def _convert_to_heat_monitoring_prop(self, mon_policy, vnf):
name, mon_policy_dict = list(mon_policy.items())[0]
tpl_trigger_name = mon_policy_dict['triggers']['resize_compute']
tpl_condition = tpl_trigger_name['condition']
properties = {}
properties['meter_name'] = tpl_trigger_name['metrics']
metadata = self.metadata
trigger_name, trigger_dict = list(mon_policy.items())[0]
tpl_condition = trigger_dict['condition']
properties = dict()
if not (trigger_dict.get('metadata') and metadata):
raise vnfm.MetadataNotMatched()
matching_metadata_dict = dict()
properties['meter_name'] = trigger_dict['metrics']
is_matched = False
for vdu_name, metadata_dict in metadata['vdus'].items():
if trigger_dict['metadata'] ==\
metadata_dict['metering.vnf']:
is_matched = True
if not is_matched:
raise vnfm.MetadataNotMatched()
matching_metadata_dict['metadata.user_metadata.vnf'] =\
trigger_dict['metadata']
properties['matching_metadata'] = \
matching_metadata_dict
properties['comparison_operator'] = \
tpl_condition['comparison_operator']
properties['period'] = tpl_condition['period']
@@ -572,8 +566,9 @@ class TOSCAToHOT(object):
properties['description'] = tpl_condition['constraint']
properties['threshold'] = tpl_condition['threshold']
# alarm url process here
alarm_url = str(vnf['attributes'].get('alarm_url'))
alarm_url = vnf['attributes'].get(trigger_name)
if alarm_url:
alarm_url = str(alarm_url)
LOG.debug('Alarm url in heat %s', alarm_url)
properties['alarm_actions'] = [alarm_url]
return properties
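The `_convert_to_heat_monitoring_prop` change above replaces the hardcoded `metering.vnf_id` metadata with a check of the trigger's `metadata` field against the metadata collected from the VDU node templates. A self-contained sketch of that mapping, using `ValueError` in place of `vnfm.MetadataNotMatched` and covering only the condition fields shown in this diff (the patch additionally wires `description` and `alarm_actions`):

```python
# Sketch: translate a trigger's condition block into OS::Aodh::Alarm
# properties, first validating the trigger metadata against the metadata
# gathered from the template's VDUs (mismatch raises, standing in for
# vnfm.MetadataNotMatched).
def trigger_to_alarm_properties(trigger_dict, vdu_metadata):
    if not (trigger_dict.get('metadata') and vdu_metadata):
        raise ValueError('metadata not matched')
    if not any(md.get('metering.vnf') == trigger_dict['metadata']
               for md in vdu_metadata['vdus'].values()):
        raise ValueError('metadata not matched')
    cond = trigger_dict['condition']
    return {
        'meter_name': trigger_dict['metrics'],
        'matching_metadata': {
            'metadata.user_metadata.vnf': trigger_dict['metadata']},
        'comparison_operator': cond['comparison_operator'],
        'period': cond['period'],
        'evaluations': cond['evaluations'],
        'statistic': cond['method'],
        'threshold': cond['threshold'],
    }

props = trigger_to_alarm_properties(
    {'metadata': 'SG1', 'metrics': 'cpu_util',
     'condition': {'comparison_operator': 'gt', 'period': 600,
                   'evaluations': 1, 'method': 'avg', 'threshold': 50}},
    {'vdus': {'VDU1': {'metering.vnf': 'SG1'}}})
```

The `SG1` value matches the `metadata: {metering.vnf: SG1}` VDU property in the sample template, which is how the alarm's `matching_metadata` ends up scoped to the right instances instead of a hardcoded VNF id.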


@@ -215,43 +215,59 @@ class VNFAlarmMonitor(object):
'tacker.tacker.alarm_monitor.drivers',
cfg.CONF.tacker.alarm_monitor_driver)
def update_vnf_with_alarm(self, vnf, policy_name, policy_dict):
def update_vnf_with_alarm(self, plugin, context, vnf, policy_dict):
triggers = policy_dict['triggers']
alarm_url = dict()
for trigger_name, trigger_dict in triggers.items():
params = dict()
params['vnf_id'] = vnf['id']
params['mon_policy_name'] = policy_name
driver = policy_dict['triggers']['resize_compute'][
'event_type']['implementation']
policy_action = policy_dict['triggers']['resize_compute'].get('action')
if not policy_action:
params['mon_policy_name'] = trigger_name
driver = trigger_dict['event_type']['implementation']
policy_action_list = trigger_dict.get('actions')
if len(policy_action_list) == 0:
_log_monitor_events(t_context.get_admin_context(),
vnf,
"Alarm not set: policy action missing")
return
alarm_action_name = policy_action['resize_compute'].get('action_name')
if not alarm_action_name:
_log_monitor_events(t_context.get_admin_context(),
vnf,
"Alarm not set: alarm action name missing")
return
params['mon_policy_action'] = alarm_action_name
alarm_url = self.call_alarm_url(driver, vnf, params)
# Other backend policies with the construct (policy, action)
# ex: (SP1, in), (SP1, out)
def _refactor_backend_policy(bk_policy_name, bk_action_name):
policy = '%(policy_name)s-%(action_name)s' % {
'policy_name': bk_policy_name,
'action_name': bk_action_name}
return policy
for policy_action in policy_action_list:
filters = {'name': policy_action}
bkend_policies =\
plugin.get_vnf_policies(context, vnf['id'], filters)
if bkend_policies:
bkend_policy = bkend_policies[0]
if bkend_policy['type'] == constants.POLICY_SCALING:
cp = trigger_dict['condition'].\
get('comparison_operator')
scaling_type = 'out' if cp == 'gt' else 'in'
policy_action = _refactor_backend_policy(policy_action,
scaling_type)
params['mon_policy_action'] = policy_action
alarm_url[trigger_name] =\
self.call_alarm_url(driver, vnf, params)
details = "Alarm URL set successfully: %s" % alarm_url
_log_monitor_events(t_context.get_admin_context(),
vnf,
details)
return alarm_url
# vnf['attribute']['alarm_url'] = alarm_url ---> create
# by plugin or vm_db
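The `_refactor_backend_policy` helper above composes a scaling trigger's action name from the backend policy and a direction inferred from the trigger's comparison operator. A standalone sketch of that naming rule:

```python
# Sketch: compose the backend scaling action name used by the alarm monitor.
# A 'gt' comparison implies scale-out; anything else implies scale-in.
def refactor_backend_policy(bk_policy_name, bk_action_name):
    return '%(policy_name)s-%(action_name)s' % {
        'policy_name': bk_policy_name,
        'action_name': bk_action_name}

def scaling_action_for(policy_name, comparison_operator):
    scaling_type = 'out' if comparison_operator == 'gt' else 'in'
    return refactor_backend_policy(policy_name, scaling_type)
```

So a CPU-high trigger on scaling policy `SP1` yields `SP1-out`, which is exactly the composite form the trigger API later splits apart during validation.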
def process_alarm_for_vnf(self, policy):
def process_alarm_for_vnf(self, vnf, trigger):
'''call in plugin'''
vnf = policy['vnf']
params = policy['params']
mon_prop = policy['properties']
params = trigger['params']
mon_prop = trigger['trigger']
alarm_dict = dict()
alarm_dict['alarm_id'] = params['data'].get('alarm_id')
alarm_dict['status'] = params['data'].get('current')
driver = mon_prop['resize_compute']['event_type']['implementation']
trigger_name, trigger_dict = list(mon_prop.items())[0]
driver = trigger_dict['event_type']['implementation']
return self.process_alarm(driver, vnf, alarm_dict)
def _invoke(self, driver, **kwargs):
@@ -382,9 +398,9 @@ class ActionRespawnHeat(ActionPolicy):
plugin.add_vnf_to_monitor(updated_vnf, vim_res['vim_type'])
LOG.debug(_("VNF %s added to monitor thread"), updated_vnf[
'id'])
if vnf_dict['attributes'].get('alarm_url'):
if vnf_dict['attributes'].get('alarming_policy'):
_delete_heat_stack(vim_res['vim_auth'])
vnf_dict['attributes'].pop('alarm_url')
vnf_dict['attributes'].pop('alarming_policy')
_respin_vnf()


@@ -249,17 +249,19 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
LOG.debug('hosting_vnf: %s', hosting_vnf)
self._vnf_monitor.add_hosting_vnf(hosting_vnf)
def add_alarm_url_to_vnf(self, vnf_dict):
def add_alarm_url_to_vnf(self, context, vnf_dict):
vnfd_yaml = vnf_dict['vnfd']['attributes'].get('vnfd', '')
vnfd_dict = yaml.load(vnfd_yaml)
if vnfd_dict and vnfd_dict.get('tosca_definitions_version'):
polices = vnfd_dict['topology_template'].get('policies', [])
for policy_dict in polices:
name, policy = policy_dict.items()[0]
name, policy = list(policy_dict.items())[0]
if policy['type'] in constants.POLICY_ALARMING:
alarm_url = self._vnf_alarm_monitor.update_vnf_with_alarm(
vnf_dict, name, policy)
vnf_dict['attributes']['alarm_url'] = alarm_url
alarm_url =\
self._vnf_alarm_monitor.update_vnf_with_alarm(
self, context, vnf_dict, policy)
vnf_dict['attributes']['alarming_policy'] = vnf_dict['id']
vnf_dict['attributes'].update(alarm_url)
break
def config_vnf(self, context, vnf_dict):
@@ -343,7 +345,7 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
vnf_id = vnf_dict['id']
LOG.debug(_('vnf_dict %s'), vnf_dict)
self.mgmt_create_pre(context, vnf_dict)
self.add_alarm_url_to_vnf(vnf_dict)
self.add_alarm_url_to_vnf(context, vnf_dict)
try:
instance_id = self._vnf_manager.invoke(
driver_name, 'create', plugin=self,
@@ -683,9 +685,11 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
policy_list.append(p)
# Check for filters
if filters.get('name'):
if filters.get('name') or filters.get('type'):
if name == filters.get('name'):
_add(policy)
if policy['type'] == filters.get('type'):
_add(policy)
break
else:
continue
@@ -714,38 +718,70 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
return scale['scale']
def _validate_alarming_policy(self, context, policy):
vnf_id = policy['vnf']['id']
# validate policy type
type = policy['type']
if type not in constants.POLICY_ALARMING:
raise exceptions.VnfPolicyTypeInvalid(
type=type,
valid_types=constants.POLICY_ALARMING,
policy=policy['id']
)
def get_vnf_policy_by_type(self, context, vnf_id, policy_type=None, fields=None): # noqa
policies = self.get_vnf_policies(context,
vnf_id,
filters={'type': policy_type})
if policies:
return policies[0]
raise exceptions.VnfPolicyTypeInvalid(type=constants.POLICY_ALARMING,
vnf_id=vnf_id)
def _validate_alarming_policy(self, context, vnf_id, trigger):
# validate alarm status
if not self._vnf_alarm_monitor.process_alarm_for_vnf(policy):
if not self._vnf_alarm_monitor.process_alarm_for_vnf(vnf_id, trigger):
raise exceptions.AlarmUrlInvalid(vnf_id=vnf_id)
# validate policy action
action = policy['action_name']
policy_ = None
action_ = None
# validate policy action. if action is composite, split it.
# ex: SP1-in, SP1-out
action = trigger['action_name']
sp_action = action.split('-')
if len(sp_action) == 2:
bk_policy_name = sp_action[0]
bk_policy_action = sp_action[1]
policies_ = self.get_vnf_policies(context, vnf_id,
filters={'name': bk_policy_name})
if policies_:
policy_ = policies_[0]
action_ = bk_policy_action
if not policy_:
if action not in constants.DEFAULT_ALARM_ACTIONS:
policy_ = self.get_vnf_policy(context, action, vnf_id)
if not policy_:
raise exceptions.VnfPolicyNotFound(
vnf_id=action,
policy=policy['id']
)
LOG.debug(_("Policy %s is validated successfully") % policy)
return policy_
LOG.debug(_("Trigger %s is validated successfully") % trigger)
return policy_, action_
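The validation above performs the inverse of the monitor's composite naming: a trigger request arrives with an action such as `SP1-out`, and the plugin must recover the backend policy name and the scaling direction, while built-in actions pass through untouched. A sketch of that split (the default-action tuple is illustrative; the patch reads it from `constants.DEFAULT_ALARM_ACTIONS`):

```python
# Sketch: split a composite trigger action such as 'SP1-out' back into
# (backend policy name, backend action). Built-in actions like 'respawn'
# have no backend policy; any other plain name is treated as a policy
# reference to be looked up.
DEFAULT_ALARM_ACTIONS = ('respawn', 'log', 'log_and_kill')  # illustrative

def split_trigger_action(action):
    parts = action.split('-')
    if len(parts) == 2:
        return parts[0], parts[1]   # composite: (policy, direction)
    if action in DEFAULT_ALARM_ACTIONS:
        return None, action         # built-in action, no backend policy
    return action, None             # bare policy name, direction unknown
```

This is why the functional test earlier in the patch switches `action_value` from `"SP1"` to `"SP1-out"`: the composite form carries the scaling direction end to end.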
# validate url
def _handle_vnf_monitoring(self, context, policy):
vnf_dict = policy['vnf']
if policy['action_name'] in constants.DEFAULT_ALARM_ACTIONS:
action = policy['action_name']
def _get_vnf_triggers(self, context, vnf_id, filters=None, fields=None):
policy = self.get_vnf_policy_by_type(
context, vnf_id, policy_type=constants.POLICY_ALARMING)
triggers = policy['properties']
vnf_trigger = dict()
for trigger_name, trigger_dict in triggers.items():
if trigger_name == filters.get('name'):
vnf_trigger['trigger'] = {trigger_name: trigger_dict}
vnf_trigger['vnf'] = policy['vnf']
break
return vnf_trigger
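The `_get_vnf_triggers` helper above looks up the VNF's alarming policy and selects the single trigger matching the requested name. A minimal standalone sketch of that selection over a policy dict of the shape the plugin builds:

```python
# Sketch: pick one named trigger out of an alarming policy's 'properties'
# mapping, returning it together with the owning VNF, as _get_vnf_triggers
# does; an unknown name yields an empty result (the plugin then raises
# TriggerNotFound).
def get_vnf_trigger(policy, trigger_name):
    for name, trigger_dict in policy['properties'].items():
        if name == trigger_name:
            return {'trigger': {name: trigger_dict}, 'vnf': policy['vnf']}
    return None

policy = {'vnf': {'id': 'vnf-1'},
          'properties': {
              'vdu_hcpu_usage_respawning': {'metrics': 'cpu_util'}}}
found = get_vnf_trigger(policy, 'vdu_hcpu_usage_respawning')
```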
def get_vnf_trigger(self, context, vnf_id, trigger_name):
trigger = self._get_vnf_triggers(
context, vnf_id, filters={'name': trigger_name})
if not trigger:
raise exceptions.TriggerNotFound(
trigger_name=trigger_name,
vnf_id=vnf_id
)
return trigger
def _handle_vnf_monitoring(self, context, trigger):
vnf_dict = trigger['vnf']
if trigger['action_name'] in constants.DEFAULT_ALARM_ACTIONS:
action = trigger['action_name']
LOG.debug(_('vnf for monitoring: %s'), vnf_dict)
infra_driver, vim_auth = self._get_infra_driver(context, vnf_dict)
action_cls = monitor.ActionPolicy.get_policy(action,
@@ -753,11 +789,9 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
if action_cls:
action_cls.execute_action(self, vnf_dict)
if policy.get('bckend_policy'):
bckend_policy = policy['bckend_policy']
if trigger.get('bckend_policy'):
bckend_policy = trigger['bckend_policy']
bckend_policy_type = bckend_policy['type']
cp = policy['properties']['resize_compute']['condition'].\
get('comparison_operator')
if bckend_policy_type == constants.POLICY_SCALING:
if vnf_dict['status'] != constants.ACTIVE:
LOG.info(context, vnf_dict,
@@ -766,7 +800,7 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
action = 'scaling'
scale = {}
scale.setdefault('scale', {})
scale['scale']['type'] = 'out' if cp == 'gt' else 'in'
scale['scale']['type'] = trigger['bckend_action']
scale['scale']['policy'] = bckend_policy['name']
infra_driver, vim_auth = self._get_infra_driver(context,
vnf_dict)
@@ -777,22 +811,16 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
def create_vnf_trigger(
self, context, vnf_id, trigger):
# Verified API: pending
# Need to use: _make_policy_dict, get_vnf_policies, get_vnf_policy
# action: scaling, refer to template to find specific scaling policy
# we can extend in future to support other policies
# Monitoring policy should be describe in heat_template_yaml.
# Create first
policy_ = self.get_vnf_policy(context,
trigger['trigger']['policy_name'],
vnf_id)
policy_.update({'action_name': trigger['trigger']['action_name']})
policy_.update({'params': trigger['trigger']['params']})
bk_policy = self._validate_alarming_policy(context, policy_)
trigger_ = self.get_vnf_trigger(
context, vnf_id, trigger['trigger']['policy_name'])
trigger_.update({'action_name': trigger['trigger']['action_name']})
trigger_.update({'params': trigger['trigger']['params']})
bk_policy, bk_action = self._validate_alarming_policy(
context, vnf_id, trigger_)
if bk_policy:
policy_.update({'bckend_policy': bk_policy})
self._handle_vnf_monitoring(context, policy_)
trigger_.update({'bckend_policy': bk_policy,
'bckend_action': bk_action})
self._handle_vnf_monitoring(context, trigger_)
return trigger['trigger']
def get_vnf_resources(self, context, vnf_id, fields=None, filters=None):


@@ -62,7 +62,7 @@ FLAVOR_EXTRA_SPECS_LIST = ('cpu_allocation',
delpropmap = {TACKERVDU: ('mgmt_driver', 'config', 'service_type',
'placement_policy', 'monitoring_policy',
'failure_policy'),
'metadata', 'failure_policy'),
TACKERCP: ('management',)}
convert_prop = {TACKERCP: {'anti_spoofing_protection':
@@ -117,6 +117,19 @@ def get_vdu_monitoring(template):
return monitoring_dict
@log.log
def get_vdu_metadata(template):
metadata = dict()
metadata.setdefault('vdus', {})
for nt in template.nodetemplates:
if nt.type_definition.is_derived_from(TACKERVDU):
metadata_dict = nt.get_property_value('metadata') or None
if metadata_dict:
metadata['vdus'][nt.name] = {}
metadata['vdus'][nt.name].update(metadata_dict)
return metadata
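The new `get_vdu_metadata` above walks the parsed node templates and collects each VDU's `metadata` property. A self-contained sketch of the same logic, simulating tosca-parser node templates with a simple stand-in class (the real code additionally filters on `is_derived_from(TACKERVDU)`):

```python
# Sketch: collect per-VDU metadata as get_vdu_metadata does, using a
# stand-in for tosca-parser node templates; VDUs without metadata are
# simply skipped.
class FakeVdu:
    def __init__(self, name, metadata):
        self.name = name
        self._metadata = metadata

    def get_property_value(self, key):
        return self._metadata if key == 'metadata' else None

def get_vdu_metadata(vdus):
    metadata = {'vdus': {}}
    for nt in vdus:
        metadata_dict = nt.get_property_value('metadata')
        if metadata_dict:
            metadata['vdus'][nt.name] = dict(metadata_dict)
    return metadata

meta = get_vdu_metadata([FakeVdu('VDU1', {'metering.vnf': 'SG1'}),
                         FakeVdu('VDU2', None)])
```

The resulting `{'vdus': {...}}` structure is what the alarm-property conversion later matches each trigger's `metadata` field against.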
@log.log
def get_mgmt_ports(tosca):
mgmt_ports = {}
@@ -177,8 +190,8 @@ def convert_unsupported_res_prop(heat_dict, unsupported_res_prop):
@log.log
def post_process_heat_template(heat_tpl, mgmt_ports, res_tpl,
unsupported_res_prop=None):
def post_process_heat_template(heat_tpl, mgmt_ports, metadata,
res_tpl, unsupported_res_prop=None):
#
# TODO(bobh) - remove when heat-translator can support literal strings.
#
@@ -200,6 +213,11 @@ def post_process_heat_template(heat_tpl, mgmt_ports, res_tpl,
else:
heat_dict['outputs'] = output
LOG.debug(_('Added output for %s'), outputname)
if metadata:
for vdu_name, metadata_dict in metadata['vdus'].items():
heat_dict['resources'][vdu_name]['properties']['metadata'] =\
metadata_dict
add_resources_tpl(heat_dict, res_tpl)
if unsupported_res_prop:
convert_unsupported_res_prop(heat_dict, unsupported_res_prop)
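The `post_process_heat_template` change above injects the collected VDU metadata into the generated HOT resources, replacing the old hardcoded per-stack metadata. A standalone sketch of just that injection step:

```python
# Sketch: copy collected VDU metadata into each matching HOT resource's
# properties, mirroring the new metadata branch in
# post_process_heat_template; a falsy metadata dict leaves the heat
# template untouched.
def inject_vdu_metadata(heat_dict, metadata):
    if metadata:
        for vdu_name, metadata_dict in metadata['vdus'].items():
            heat_dict['resources'][vdu_name]['properties']['metadata'] = \
                metadata_dict
    return heat_dict

heat = {'resources': {'VDU1': {'type': 'OS::Nova::Server',
                               'properties': {}}}}
heat = inject_vdu_metadata(heat, {'vdus': {'VDU1': {'metering.vnf': 'SG1'}}})
```

Ceilometer samples for the booted server then carry `user_metadata.vnf`, which is what the alarm's `matching_metadata` keys on.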