Move usage from deprecated Ceilometer API to Gnocchi API

This removes usage of the deprecated Ceilometer API:

1. Change the trigger type from "OS::Ceilometer::Alarm"
to "OS::Aodh::GnocchiAggregationByResourcesAlarm"
2. Add "resource_type", fixed to the value "instance"
3. Rename some parameters (meter_name -> metric,
statistic -> aggregation_method, period -> granularity)
4. Change the aggregation value from "average" to "mean" in the
method used to compare against the threshold (see the sketch below)

Change-Id: I486c14cbc9d05a0e826bbef1ad181bdcb2d8c951
Closes-Bug: #1735484
hoangphuocbk 2017-12-01 02:37:38 +09:00 committed by Cong Phuoc Hoang
parent 7a0efa8007
commit 0da9469017
25 changed files with 285 additions and 249 deletions
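
For illustration, the parameter mapping described in the commit message amounts to roughly the following change in a trigger definition; this is a minimal before/after sketch using values from the sample templates updated by this change:

.. code-block:: yaml

    # before: Ceilometer-based alarm trigger
    meter_name: cpu_util
    condition:
      threshold: 50
      constraint: utilization greater_than 50%
      period: 600
      evaluations: 1
      method: average
      comparison_operator: gt

    # after: Gnocchi aggregation-by-resources alarm trigger
    metric: cpu_util
    condition:
      threshold: 50
      constraint: utilization greater_than 50%
      granularity: 600
      evaluations: 1
      aggregation_method: mean
      resource_type: instance
      comparison_operator: gt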

View File

@ -28,23 +28,25 @@ described firstly like other TOSCA templates in Tacker.
.. code-block:: yaml
policies:
- vdu1_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
triggers:
resize_compute:
policies:
- vdu1_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
triggers:
vdu_hcpu_usage_respawning:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
metrics: cpu_util
metric: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 65
granularity: 600
evaluations: 1
method: avg
aggregation_method: mean
resource_type: instance
comparison_operator: gt
actions: [respawn]
metadata: VDU1
action: [respawn]
The alarm framework already supports some default backend actions such as
**scaling, respawn, log, and log_and_kill**.
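
A trigger defined this way is translated by Tacker into a Heat resource of type ``OS::Aodh::GnocchiAggregationByResourcesAlarm``. The resource below is a rough sketch based on the HOT samples in this change; the ``scaling_group_id`` value (for example ``VDU1-2e3261d9-1``) is generated by Tacker at deployment time from the VDU metadata and a per-deployment UUID:

.. code-block:: yaml

    vdu_hcpu_usage_respawning:
      type: OS::Aodh::GnocchiAggregationByResourcesAlarm
      properties:
        description: utilization greater_than 50%
        metric: cpu_util
        threshold: 50
        granularity: 600
        aggregation_method: mean
        resource_type: instance
        evaluation_periods: 1
        comparison_operator: gt
        query:
          str_replace:
            template: '{"=": {"server_group": "scaling_group_id"}}'
            params:
              scaling_group_id: VDU1-2e3261d9-1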
@ -77,7 +79,7 @@ in Tacker:
image: cirros-0.4.0-x86_64-disk
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: SG1}
metadata: {metering.server_group: SG1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -89,30 +91,6 @@ in Tacker:
node: VL1
- virtualBinding:
node: VDU1
VDU2:
type: tosca.nodes.nfv.VDU.Tacker
capabilities:
nfv_compute:
properties:
disk_size: 1 GB
mem_size: 512 MB
num_cpus: 2
properties:
image: cirros-0.4.0-x86_64-disk
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: SG1}
CP2:
type: tosca.nodes.nfv.CP.Tacker
properties:
management: true
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU2
VL1:
type: tosca.nodes.nfv.VL
@ -123,13 +101,13 @@ in Tacker:
policies:
- SP1:
type: tosca.policies.tacker.Scaling
targets: [VDU1]
properties:
increment: 1
cooldown: 120
min_instances: 1
max_instances: 3
default_instances: 2
targets: [VDU1,VDU2]
default_instances: 1
- vdu_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
@ -138,32 +116,33 @@ in Tacker:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
metrics: cpu_util
metric: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 600
threshold: 80
constraint: utilization greater_than 80%
granularity: 300
evaluations: 1
method: avg
aggregation_method: mean
resource_type: instance
comparison_operator: gt
metadata: SG1
actions: [SP1]
action: [SP1]
vdu_lcpu_usage_scaling_in:
targets: [VDU1, VDU2]
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
metrics: cpu_util
metric: cpu_util
condition:
threshold: 10
constraint: utilization less_than 10%
period: 600
granularity: 300
evaluations: 1
method: avg
aggregation_method: mean
resource_type: instance
comparison_operator: lt
metadata: SG1
actions: [SP1]
action: [SP1]
**NOTE:**
@ -188,6 +167,22 @@ How to monitor VNFs via alarm triggers
How to set up alarm configuration
=================================
Tacker provides sample templates that implement Ceilometer-based alarms for
monitoring VNFs; they are located in **tacker/samples/tosca-templates/vnfd**.
1. tosca-vnfd-alarm-multi-actions.yaml
2. tosca-vnfd-alarm-respawn.yaml
3. tosca-vnfd-alarm-scale.yaml
The following commands show how to create a VNF with alarms for scaling in and out.
.. code-block:: console
$ cd ~/tacker/samples/tosca-templates/vnfd
$ openstack vnf create --vnfd-template tosca-vnfd-alarm-scale.yaml VNF1
First, the VNFD and VNF need to be created successfully using a pre-defined
TOSCA template for alarm monitoring. Then, to verify whether the alarm
configuration defined in Tacker has been successfully passed to Ceilometer,
@ -195,51 +190,53 @@ Tacker users could use CLI:
.. code-block:: console
$ aodh alarm list
$ openstack alarm list
+--------------------------------------+--------------------------------------------+-----------------------------------------------------------------------------------+-------------------+----------+---------+
| alarm_id | type | name | state | severity | enabled |
+--------------------------------------+--------------------------------------------+-----------------------------------------------------------------------------------+-------------------+----------+---------+
| f418ebf8-f8a6-4991-8f0d-938e38434411 | gnocchi_aggregation_by_resources_threshold | VNF1_7582cdf4-58ed-4df8-8fa2-c15938adf70b-vdu_hcpu_usage_scaling_out-4imzw3c7cicb | insufficient data | low | True |
| 70d86622-940a-4bc3-87c2-d5dfbb01bbea | gnocchi_aggregation_by_resources_threshold | VNF1_7582cdf4-58ed-4df8-8fa2-c15938adf70b-vdu_lcpu_usage_scaling_in-dwvdvbegiqdk | insufficient data | low | True |
+--------------------------------------+--------------------------------------------+-----------------------------------------------------------------------------------+-------------------+----------+---------+
+--------------------------------------+-----------+--------------------------------------------------------------------------------------------------------------------------------------+-------------------+----------+---------+
| alarm_id | type | name | state | severity | enabled |
+--------------------------------------+-----------+--------------------------------------------------------------------------------------------------------------------------------------+-------------------+----------+---------+
| 6f2336b9-e0a2-4e33-88be-bc036192b42b | threshold | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-a0f60b00-ad3d-4769-92ef-e8d9518da2c8-vdu_lcpu_scaling_in-smgctfnc3ql5 | insufficient data | low | True |
| e049f0d3-09a8-46c0-9b88-e61f1f524aab | threshold | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-a0f60b00-ad3d-4769-92ef-e8d9518da2c8-vdu_hcpu_usage_scaling_out-lubylov5g6xb | insufficient data | low | True |
+--------------------------------------+-----------+--------------------------------------------------------------------------------------------------------------------------------------+-------------------+----------+---------+
.. code-block:: console
$ aodh alarm show 6f2336b9-e0a2-4e33-88be-bc036192b42b
+---------------------------+-------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------------+-------------------------------------------------------------------------------------------------------------------------------+
| alarm_actions | [u'http://pinedcn:9890/v1.0/vnfs/a0f60b00-ad3d-4769-92ef-e8d9518da2c8/vdu_lcpu_scaling_in/SP1-in/yl7kh5qd'] |
| alarm_id | 6f2336b9-e0a2-4e33-88be-bc036192b42b |
| comparison_operator | lt |
| description | utilization less_than 10% |
| enabled | True |
| evaluation_periods | 1 |
| exclude_outliers | False |
| insufficient_data_actions | None |
| meter_name | cpu_util |
| name | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-a0f60b00-ad3d-4769-92ef-e8d9518da2c8-vdu_lcpu_scaling_in-smgctfnc3ql5 |
| ok_actions | None |
| period | 600 |
| project_id | 3db801789c9e4b61b14ce448c9e7fb6d |
| query | metadata.user_metadata.vnf_id = a0f60b00-ad3d-4769-92ef-e8d9518da2c8 |
| repeat_actions | True |
| severity | low |
| state | insufficient data |
| state_timestamp | 2016-11-16T18:39:30.134954 |
| statistic | avg |
| threshold | 10.0 |
| time_constraints | [] |
| timestamp | 2016-11-16T18:39:30.134954 |
| type | threshold |
| user_id | a783e8a94768484fb9a43af03c6426cb |
+---------------------------+-------------------------------------------------------------------------------------------------------------------------------+
$ openstack alarm show 70d86622-940a-4bc3-87c2-d5dfbb01bbea
+---------------------------+------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------------+------------------------------------------------------------------------------------------------------------------+
| aggregation_method | mean |
| alarm_actions | [u'http://ubuntu:9890/v1.0/vnfs/7582cdf4-58ed-4df8-8fa2-c15938adf70b/vdu_lcpu_usage_scaling_in/SP1-in/v2fq7rd7'] |
| alarm_id | 70d86622-940a-4bc3-87c2-d5dfbb01bbea |
| comparison_operator | lt |
| description | utilization less_than 10% |
| enabled | True |
| evaluation_periods | 1 |
| granularity | 60 |
| insufficient_data_actions | [] |
| metric | cpu_util |
| name | VNF1_7582cdf4-58ed-4df8-8fa2-c15938adf70b-vdu_lcpu_usage_scaling_in-dwvdvbegiqdk |
| ok_actions | [] |
| project_id | b5e054a3861b4da2b084aca9530096be |
| query | {"=": {"server_group": "SG1-64beb5e4-c0"}} |
| repeat_actions | True |
| resource_type | instance |
| severity | low |
| state | insufficient data |
| state_reason | Not evaluated yet |
| state_timestamp | 2018-07-20T06:00:33.142762 |
| threshold | 10.0 |
| time_constraints | [] |
| timestamp | 2018-07-20T06:00:33.142762 |
| type | gnocchi_aggregation_by_resources_threshold |
| user_id | 61fb5c6193e549f3baee26bd508c0b29 |
+---------------------------+------------------------------------------------------------------------------------------------------------------+
How to trigger alarms:
======================
As shown in the above command output, the alarm state is reported as
"insufficient data". The alarm is triggered once its state
changes to "alarm".
@ -252,9 +249,9 @@ in **/etc/ceilometer/pipeline.yaml** file and then restart Ceilometer service.
Another way to check whether the backend action is handled properly in Tacker:
.. code-block::ini
.. code-block:: console
curl -H "Content-Type: application/json" -X POST -d '{"alarm_id": "35a80852-e24f-46ed-bd34-e2f831d00172", "current": "alarm"}' http://pinedcn:9890/v1.0/vnfs/a0f60b00-ad3d-4769-92ef-e8d9518da2c8/vdu_lcpu_scaling_in/SP1-in/yl7kh5qd
curl -H "Content-Type: application/json" -X POST -d '{"alarm_id": "35a80852-e24f-46ed-bd34-e2f831d00172", "current": "alarm"}' http://ubuntu:9890/v1.0/vnfs/7582cdf4-58ed-4df8-8fa2-c15938adf70b/vdu_lcpu_usage_scaling_in/SP1-in/v2fq7rd7
Then, users can check Horizon to see whether the VNF has been respawned. Please
note that the URL used in the above command can be captured from

View File

@ -0,0 +1,11 @@
---
fixes:
- |
Removes usage of the deprecated Ceilometer API:
1. Change the trigger type from "OS::Ceilometer::Alarm" to
"OS::Aodh::GnocchiAggregationByResourcesAlarm"
2. Add "resource_type", fixed to the value "instance"
3. Rename some parameters (meter_name -> metric, statistic ->
aggregation_method, period -> granularity)
4. Change the aggregation value from "average" to "mean" in the method
used to compare against the threshold

View File

@ -18,7 +18,7 @@ topology_template:
image: cirros-0.4.0-x86_64-disk
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: VDU1}
metadata: {metering.server_group: VDU1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -45,13 +45,14 @@ topology_template:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 600
granularity: 600
evaluations: 1
method: average
aggregation_method: mean
comparison_operator: gt
resource_type: instance
metadata: VDU1
action: [respawn, log]

View File

@ -18,7 +18,7 @@ topology_template:
image: cirros-0.4.0-x86_64-disk
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: VDU1}
metadata: {metering.server_group: VDU1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -45,13 +45,14 @@ topology_template:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 600
granularity: 300
evaluations: 1
method: average
aggregation_method: mean
resource_type: instance
comparison_operator: gt
metadata: VDU1
action: [respawn]

View File

@ -18,7 +18,7 @@ topology_template:
image: cirros-0.4.0-x86_64-disk
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: SG1}
metadata: {metering.server_group: SG1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -30,30 +30,6 @@ topology_template:
node: VL1
- virtualBinding:
node: VDU1
VDU2:
type: tosca.nodes.nfv.VDU.Tacker
capabilities:
nfv_compute:
properties:
disk_size: 1 GB
mem_size: 512 MB
num_cpus: 2
properties:
image: cirros-0.4.0-x86_64-disk
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: SG1}
CP2:
type: tosca.nodes.nfv.CP.Tacker
properties:
management: true
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU2
VL1:
type: tosca.nodes.nfv.VL
@ -64,13 +40,13 @@ topology_template:
policies:
- SP1:
type: tosca.policies.tacker.Scaling
targets: [VDU1,VDU2]
targets: [VDU1]
properties:
increment: 1
cooldown: 120
min_instances: 1
max_instances: 3
default_instances: 2
default_instances: 1
- vdu_cpu_usage_monitoring_policy:
type: tosca.policies.tacker.Alarming
@ -79,13 +55,14 @@ topology_template:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 600
threshold: 80
constraint: utilization greater_than 80%
granularity: 60
evaluations: 1
method: average
aggregation_method: mean
resource_type: instance
comparison_operator: gt
metadata: SG1
action: [SP1]
@ -94,13 +71,14 @@ topology_template:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 10
constraint: utilization less_than 10%
period: 600
granularity: 60
evaluations: 1
method: average
aggregation_method: mean
resource_type: instance
comparison_operator: lt
metadata: SG1
action: [SP1]

View File

@ -149,6 +149,11 @@ class MetadataNotMatched(exceptions.InvalidInput):
message = _("Metadata for alarm policy is not matched")
class InvalidResourceType(exceptions.InvalidInput):
message = _("Resource type %(resource_type)s for alarm policy "
"is not supported")
class InvalidSubstitutionMapping(exceptions.InvalidInput):
message = _("Input for substitution mapping requirements are not"
" valid for %(requirement)s. They must be in the form"

View File

@ -18,7 +18,7 @@ topology_template:
image: cirros-0.4.0-x86_64-disk
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: VDU1}
metadata: {metering.server_group: VDU1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -45,13 +45,14 @@ topology_template:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 600
granularity: 600
evaluations: 1
method: average
aggregation_method: mean
resource_type: instance
comparison_operator: gt
metadata: VDU1
action: [respawn]

View File

@ -18,7 +18,7 @@ topology_template:
image: cirros-0.4.0-x86_64-disk
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: SG1}
metadata: {metering.server_group: SG1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -55,13 +55,14 @@ topology_template:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 600
granularity: 600
evaluations: 1
method: average
aggregation_method: mean
resource_type: instance
comparison_operator: gt
metadata: SG1
action: [SP1]
@ -70,13 +71,14 @@ topology_template:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 10
constraint: utilization less_than 10%
period: 600
granularity: 600
evaluations: 1
method: average
aggregation_method: mean
resource_type: instance
comparison_operator: lt
metadata: SG1
action: [SP1]

View File

@ -13,7 +13,6 @@
# under the License.
import json
import time
import unittest
from tacker.plugins.common import constants as evt_constants
from tacker.tests import constants
@ -139,13 +138,11 @@ class VnfTestAlarmMonitor(base.BaseTackerTest):
self.addCleanup(self.wait_until_vnf_delete, vnf_id,
constants.VNF_CIRROS_DELETE_TIMEOUT)
@unittest.skip("Skip to wait for Heat-translator to support gnocchi alarm")
def test_vnf_alarm_respawn(self):
self._test_vnf_tosca_alarm(
'sample-tosca-alarm-respawn.yaml',
'alarm and respawn-vnf')
@unittest.skip("Skip and wait for releasing Heat Translator")
def test_vnf_alarm_scale(self):
self._test_vnf_tosca_alarm(
'sample-tosca-alarm-scale.yaml',

View File

@ -14,13 +14,13 @@
import os
import testtools
import unittest
from tacker.tosca import utils
from toscaparser import tosca_template
from toscaparser.utils import yamlparser
from translator.hot import tosca_translator
from tacker.tosca import utils
class TestSamples(testtools.TestCase):
"""Sample tosca validation.
@ -82,11 +82,9 @@ class TestSamples(testtools.TestCase):
def test_scale_sample(self, tosca_file=['tosca-vnfd-scale.yaml']):
self._test_samples(tosca_file)
@unittest.skip("Skip and wait for releasing Heat Translator")
def test_alarm_sample(self, tosca_file=['tosca-vnfd-alarm-scale.yaml']):
self._test_samples(tosca_file)
@unittest.skip("Skip and wait for releasing Heat Translator")
def test_list_samples(self,
files=['tosca-vnfd-scale.yaml',
'tosca-vnfd-alarm-scale.yaml']):

View File

@ -14,7 +14,7 @@ resources:
- port: { get_resource: CP1 }
image: cirros-0.4.0-x86_64-disk
flavor: m1.tiny
metadata: {metering.vnf: SG1}
metadata: {metering.server_group: SG1-2e3261d9-14}
VL1:
type: OS::Neutron::Net
CP1:

View File

@ -29,13 +29,18 @@ resources:
ram: 512
vcpus: 2
vdu_hcpu_usage_respawning:
type: OS::Aodh::Alarm
type: OS::Aodh::GnocchiAggregationByResourcesAlarm
properties:
description: utilization greater_than 50%
meter_name: cpu_util
metric: cpu_util
threshold: 50
period: 60
statistic: avg
granularity: 60
aggregation_method: mean
resource_type: instance
evaluation_periods: 1
comparison_operator: gt
'matching_metadata': {'metadata.user_metadata.vnf': 'VDU1'}
query:
str_replace:
template: '{"=": {"server_group": "scaling_group_id"}}'
params:
scaling_group_id: VDU1-2e3261d9-1

View File

@ -18,7 +18,7 @@ resources:
networks:
- port: {get_resource: CP1}
user_data_format: SOFTWARE_CONFIG
metadata: {'metering.vnf': 'VDU1'}
metadata: {'metering.server_group': VDU1-2e3261d9-1}
type: OS::Nova::Server
CP1:
properties: {network: net_mgmt, port_security_enabled: false}
@ -30,13 +30,18 @@ resources:
ram: 512
vcpus: 2
vdu_hcpu_usage_respawning:
type: OS::Aodh::Alarm
type: OS::Aodh::GnocchiAggregationByResourcesAlarm
properties:
description: utilization greater_than 50%
meter_name: cpu_util
metric: cpu_util
threshold: 50
period: 60
statistic: avg
granularity: 60
aggregation_method: mean
resource_type: instance
evaluation_periods: 1
comparison_operator: gt
'matching_metadata': {'metadata.user_metadata.vnf': 'VDU1'}
query:
str_replace:
template: '{"=": {"server_group": "scaling_group_id"}}'
params:
scaling_group_id: VDU1-2e3261d9-1

View File

@ -30,24 +30,34 @@ resources:
type: OS::Heat::ScalingPolicy
vdu_hcpu_usage_scaling_out:
type: OS::Aodh::Alarm
type: OS::Aodh::GnocchiAggregationByResourcesAlarm
properties:
description: utilization greater_than 50%
meter_name: cpu_util
statistic: avg
period: 600
metric: cpu_util
aggregation_method: mean
granularity: 600
evaluation_periods: 1
threshold: 50
matching_metadata: {'metadata.user_metadata.vnf': SG1}
resource_type: instance
query:
str_replace:
template: '{"=": {"server_group": "scaling_group_id"}}'
params:
scaling_group_id: SG1-2e3261d9-14
comparison_operator: gt
vdu_lcpu_usage_scaling_in:
type: OS::Aodh::Alarm
type: OS::Aodh::GnocchiAggregationByResourcesAlarm
properties:
description: utilization less_than 10%
meter_name: cpu_util
statistic: avg
period: 600
metric: cpu_util
aggregation_method: mean
granularity: 600
evaluation_periods: 1
threshold: 10
matching_metadata: {'metadata.user_metadata.vnf': SG1}
resource_type: instance
query:
str_replace:
template: '{"=": {"server_group": "scaling_group_id"}}'
params:
scaling_group_id: SG1-2e3261d9-14
comparison_operator: lt
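
Once Heat resolves the ``str_replace`` functions above, each alarm ends up with a plain Gnocchi resource query; for the scaling sample it would look roughly like the following (``SG1-2e3261d9-14`` is just the sample server group name used in this template):

.. code-block:: yaml

    query: '{"=": {"server_group": "SG1-2e3261d9-14"}}'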

View File

@ -18,7 +18,7 @@ topology_template:
image: cirros-0.4.0-x86_64-disk
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: VDU1}
metadata: {metering.server_group: VDU1-2e3261d9-1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -45,13 +45,14 @@ topology_template:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 600
granularity: 600
evaluations: 1
method: average
aggregation_method: mean
resource_type: instance
comparison_operator: gt
metadata: VDU1
metadata: VDU1-2e3261d9-1
actions: [respawn, log]

View File

@ -18,7 +18,7 @@ topology_template:
image: cirros-0.4.0-x86_64-disk
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: VDU1}
metadata: {metering.server_group: VDU1-2e3261d9-1}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -45,13 +45,14 @@ topology_template:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 600
granularity: 600
evaluations: 1
method: average
aggregation_method: mean
resource_type: instance
comparison_operator: gt
metadata: VDU1
metadata: VDU1-2e3261d9-1
action: [respawn]

View File

@ -18,7 +18,7 @@ topology_template:
image: cirros-0.4.0-x86_64-disk
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: SG1}
metadata: {metering.server_group: SG1-2e3261d9-14}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -55,13 +55,14 @@ topology_template:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 600
granularity: 600
evaluations: 1
method: average
aggregation_method: mean
resource_type: instance
comparison_operator: gt
metadata: SG1
metadata: SG1-2e3261d9-14
action: [SP1]

View File

@ -22,7 +22,6 @@ topology_template:
mgmt_driver: noop
availability_zone: nova
CP1:
type: tosca.nodes.nfv.CP.Tacker
properties:
@ -48,13 +47,14 @@ topology_template:
event_type:
type: tosca.events.resource.utilization
implementation: Ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 60
granularity: 60
evaluations: 1
method: average
aggregation_method: mean
resource_type: instance
comparison_operator: gt
metadata: VDU1
metadata: VDU1-2e3261d9-1
action: ''

View File

@ -18,7 +18,7 @@ topology_template:
image: cirros-0.4.0-x86_64-disk
mgmt_driver: noop
availability_zone: nova
metadata: {metering.vnf: VDU1}
metadata: {metering.server_group: VDU1-2e3261d9-1}
CP1:
@ -46,13 +46,14 @@ topology_template:
event_type:
type: tosca.events.resource.utilization
implementation: Ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 60
granularity: 60
evaluations: 1
method: average
aggregation_method: mean
resource_type: instance
comparison_operator: gt
metadata: VDU1
metadata: VDU1-2e3261d9-1
action: ''

View File

@ -14,7 +14,7 @@ topology_template:
mgmt_driver: noop
availability_zone: nova
flavor: m1.tiny
metadata: {metering.vnf: SG1}
metadata: {metering.server_group: SG1-2e3261d9-14}
CP1:
type: tosca.nodes.nfv.CP.Tacker
@ -51,28 +51,30 @@ topology_template:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 50
constraint: utilization greater_than 50%
period: 600
granularity: 600
evaluations: 1
method: average
aggregation_method: mean
resource_type: instance
comparison_operator: gt
metadata: SG1
metadata: SG1-2e3261d9-14
action: [SP1]
vdu_lcpu_usage_scaling_in:
event_type:
type: tosca.events.resource.utilization
implementation: ceilometer
meter_name: cpu_util
metric: cpu_util
condition:
threshold: 10
constraint: utilization less_than 10%
period: 600
granularity: 600
evaluations: 1
method: average
aggregation_method: mean
resource_type: instance
comparison_operator: lt
metadata: SG1
metadata: SG1-2e3261d9-14
action: [SP1]

View File

@ -17,7 +17,6 @@ import codecs
import json
import mock
import os
import unittest
import yaml
from tacker import context
@ -447,7 +446,6 @@ class TestOpenStack(base.TestCase):
'hot_tosca_mac_ip.yaml'
)
@unittest.skip("Skip and wait for releasing Heat Translator")
def test_create_tosca_alarm_respawn(self):
self._test_assert_equal_for_tosca_templates(
'tosca_alarm_respawn.yaml',
@ -455,7 +453,6 @@ class TestOpenStack(base.TestCase):
is_monitor=False
)
@unittest.skip("Skip and wait for releasing Heat Translator")
def test_create_tosca_alarm_scale(self):
self._test_assert_equal_for_tosca_templates(
'tosca_alarm_scale.yaml',
@ -464,7 +461,6 @@ class TestOpenStack(base.TestCase):
is_monitor=False
)
@unittest.skip("Skip and wait for releasing Heat Translator")
def test_create_tosca_with_alarm_monitoring_not_matched(self):
self.assertRaises(vnfm.MetadataNotMatched,
self._test_assert_equal_for_tosca_templates,

View File

@ -183,50 +183,67 @@ def get_vdu_applicationmonitoring(template):
@log.log
def get_vdu_metadata(template):
def get_vdu_metadata(template, unique_id=None):
metadata = dict()
metadata.setdefault('vdus', {})
for nt in template.nodetemplates:
if nt.type_definition.is_derived_from(TACKERVDU):
metadata_dict = nt.get_property_value('metadata') or None
if metadata_dict:
metadata_dict['metering.server_group'] = \
(metadata_dict['metering.server_group'] + '-'
+ unique_id)[:15]
metadata['vdus'][nt.name] = {}
metadata['vdus'][nt.name].update(metadata_dict)
return metadata
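
For illustration, with ``unique_id`` set to a generated UUID the ``metering.server_group`` value is suffixed and truncated to 15 characters; a rough sketch using the sample values from this change:

.. code-block:: yaml

    # metadata as written in the TOSCA template
    metadata: {metering.server_group: SG1}
    # value written into the Heat template:
    # 'SG1' + '-' + unique_id, truncated to 15 characters
    metadata: {metering.server_group: SG1-2e3261d9-14}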
@log.log
def pre_process_alarm_resources(vnf, template, vdu_metadata):
def pre_process_alarm_resources(vnf, template, vdu_metadata, unique_id=None):
alarm_resources = dict()
matching_metadata = dict()
query_metadata = dict()
alarm_actions = dict()
for policy in template.policies:
if (policy.type_definition.is_derived_from(MONITORING)):
matching_metadata =\
_process_matching_metadata(vdu_metadata, policy)
if policy.type_definition.is_derived_from(MONITORING):
query_metadata = _process_query_metadata(
vdu_metadata, policy, unique_id)
alarm_actions = _process_alarm_actions(vnf, policy)
alarm_resources['matching_metadata'] = matching_metadata
alarm_resources['query_metadata'] = query_metadata
alarm_resources['alarm_actions'] = alarm_actions
return alarm_resources
def _process_matching_metadata(metadata, policy):
matching_mtdata = dict()
def _process_query_metadata(metadata, policy, unique_id):
query_mtdata = dict()
triggers = policy.entity_tpl['triggers']
for trigger_name, trigger_dict in triggers.items():
if not (trigger_dict.get('metadata') and metadata):
raise vnfm.MetadataNotMatched()
is_matched = False
for vdu_name, metadata_dict in metadata['vdus'].items():
if trigger_dict['metadata'] ==\
metadata_dict['metering.vnf']:
is_matched = True
if not is_matched:
raise vnfm.MetadataNotMatched()
matching_mtdata[trigger_name] = dict()
matching_mtdata[trigger_name]['metadata.user_metadata.vnf'] =\
trigger_dict['metadata']
return matching_mtdata
resource_type = trigger_dict.get('condition').get('resource_type')
# TODO(phuoc): currently, Tacker only supports resource_type with
# instance value. Other types such as instance_network_interface,
# instance_disk can be supported in the future.
if resource_type == 'instance':
if not (trigger_dict.get('metadata') and metadata):
raise vnfm.MetadataNotMatched()
is_matched = False
for vdu_name, metadata_dict in metadata['vdus'].items():
trigger_dict['metadata'] = \
(trigger_dict['metadata'] + '-' + unique_id)[:15]
if trigger_dict['metadata'] == \
metadata_dict['metering.server_group']:
is_matched = True
if not is_matched:
raise vnfm.MetadataNotMatched()
query_template = dict()
query_template['str_replace'] = dict()
query_template['str_replace']['template'] = \
'{"=": {"server_group": "scaling_group_id"}}'
scaling_group_param = \
{'scaling_group_id': trigger_dict['metadata']}
query_template['str_replace']['params'] = scaling_group_param
else:
raise vnfm.InvalidResourceType(resource_type=resource_type)
query_mtdata[trigger_name] = query_template
return query_mtdata
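
The resulting ``query_metadata`` maps each trigger name to a Heat ``str_replace`` snippet that is later merged into the alarm resource's ``query`` property. Rendered as YAML, one entry looks roughly like this (trigger and server group names taken from the scaling sample):

.. code-block:: yaml

    vdu_lcpu_usage_scaling_in:
      str_replace:
        template: '{"=": {"server_group": "scaling_group_id"}}'
        params:
          scaling_group_id: SG1-2e3261d9-14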
def _process_alarm_actions(vnf, policy):
@ -387,8 +404,8 @@ def represent_odict(dump, tag, mapping, flow_style=None):
@log.log
def post_process_heat_template(heat_tpl, mgmt_ports, metadata,
alarm_resources, res_tpl,
vol_res={}, unsupported_res_prop=None):
alarm_resources, res_tpl, vol_res={},
unsupported_res_prop=None, unique_id=None):
#
# TODO(bobh) - remove when heat-translator can support literal strings.
#
@ -412,19 +429,21 @@ def post_process_heat_template(heat_tpl, mgmt_ports, metadata,
LOG.debug('Added output for %s', outputname)
if metadata:
for vdu_name, metadata_dict in metadata['vdus'].items():
metadata_dict['metering.server_group'] = \
(metadata_dict['metering.server_group'] + '-' + unique_id)[:15]
if heat_dict['resources'].get(vdu_name):
heat_dict['resources'][vdu_name]['properties']['metadata'] =\
metadata_dict
matching_metadata = alarm_resources.get('matching_metadata')
query_metadata = alarm_resources.get('query_metadata')
alarm_actions = alarm_resources.get('alarm_actions')
if matching_metadata:
for trigger_name, matching_metadata_dict in matching_metadata.items():
if query_metadata:
for trigger_name, matching_metadata_dict in query_metadata.items():
if heat_dict['resources'].get(trigger_name):
matching_mtdata = dict()
matching_mtdata['matching_metadata'] =\
matching_metadata[trigger_name]
query_mtdata = dict()
query_mtdata['query'] = \
query_metadata[trigger_name]
heat_dict['resources'][trigger_name]['properties'].\
update(matching_mtdata)
update(query_mtdata)
if alarm_actions:
for trigger_name, alarm_actions_dict in alarm_actions.items():
if heat_dict['resources'].get(trigger_name):

View File

@ -335,6 +335,12 @@ class OpenStack(abstract_driver.DeviceAbstractDriver,
if events[0].id != last_event_id:
if events[0].resource_status == 'SIGNAL_COMPLETE':
break
else:
# When the number of instances reaches the min or max, the comparison
# below will let the VNF status turn into the ACTIVE state.
if events[0].resource_status == 'CREATE_COMPLETE' or \
events[0].resource_status == 'SIGNAL_COMPLETE':
break
except Exception as e:
error_reason = _("VNF scaling failed for stack %(stack)s with "
"error %(error)s") % {

View File

@ -13,6 +13,7 @@
from oslo_config import cfg
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import uuidutils
from toscaparser import tosca_template
from toscaparser.utils import yamlparser
from translator.hot import tosca_translator
@ -275,9 +276,11 @@ class TOSCAToHOT(object):
LOG.debug("tosca-parser error: %s", str(e))
raise vnfm.ToscaParserFailed(error_msg_details=str(e))
metadata = toscautils.get_vdu_metadata(tosca)
alarm_resources =\
toscautils.pre_process_alarm_resources(self.vnf, tosca, metadata)
unique_id = uuidutils.generate_uuid()
metadata = toscautils.get_vdu_metadata(tosca, unique_id=unique_id)
alarm_resources = toscautils.pre_process_alarm_resources(
self.vnf, tosca, metadata, unique_id=unique_id)
monitoring_dict = toscautils.get_vdu_monitoring(tosca)
mgmt_ports = toscautils.get_mgmt_ports(tosca)
nested_resource_name = toscautils.get_nested_resources_name(tosca)
@ -319,7 +322,8 @@ class TOSCAToHOT(object):
heat_template_yaml = toscautils.post_process_heat_template(
heat_template_yaml, mgmt_ports, metadata, alarm_resources,
res_tpl, block_storage_details, self.unsupported_props)
res_tpl, block_storage_details, self.unsupported_props,
unique_id=unique_id)
self.heat_template_yaml = heat_template_yaml
self.monitoring_dict = monitoring_dict

View File

@ -812,14 +812,8 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
return trigger
def _handle_vnf_monitoring(self, context, trigger):
vnf_dict = trigger['vnf']
if trigger['action_name'] in constants.DEFAULT_ALARM_ACTIONS:
action = trigger['action_name']
LOG.debug('vnf for monitoring: %s', vnf_dict)
self._vnf_action.invoke(
action, 'execute_action', plugin=self, context=context,
vnf_dict=vnf_dict, args={})
vnf_dict = trigger['vnf']
# Multiple actions support
if trigger.get('policy_actions'):
policy_actions = trigger['policy_actions']