Implement Fenix plugin in Tacker

Add a Fenix plugin for host maintenance.
This feature adds a plugin for Fenix, create_vnf_maintenance() in the VNFM,
and VNFMaintenanceAlarmMonitor to create alarms for Fenix. It also modifies
alarm_receiver and the CRUD operations in the VNFM.

With this feature, every VNF gets an ALL_MAINTENANCE resource to interact
with the Fenix plugin, plus a [VDU_NAME]_MAINTENANCE resource for each VDU
that has the maintenance property. [VDU_NAME]_MAINTENANCE will be used to
perform VNF software modification.

Currently, the plugin can perform CRUD of maintenance constraints,
scale in/out, and migration for the MIGRATE and LIVE_MIGRATE types. The
feature has functions for OWN_ACTION based on modified healing, but it does
not work with the default VNF workflow in Fenix. The feature also does not
support server_group or HA-related actions such as switch-over, since Tacker
does not support them. These features will be enhanced once the required
support is added.

Co-Authored-By: Hyunsik Yang <yangun@dcn.ssu.ac.kr>

Implements: blueprint vnf-rolling-upgrade
Change-Id: I34b82fd40830dd74d0f5ef24a60b3ff465cd4819
Jangwon Lee 2020-03-10 20:45:06 +09:00 committed by JangwonLee
parent fbadb8402e
commit df0ba6b7e0
28 changed files with 1321 additions and 35 deletions

View File

@@ -67,6 +67,7 @@
      - openstack/python-tackerclient
      - openstack/tacker
      - openstack/tacker-horizon
      - x/fenix
    vars:
      devstack_localrc:
        CELLSV2_SETUP: singleconductor
@@ -93,6 +94,7 @@
        mistral: https://opendev.org/openstack/mistral
        tacker: https://opendev.org/openstack/tacker
        blazar: https://opendev.org/openstack/blazar
        fenix: https://opendev.org/x/fenix
      devstack_services:
        # Core services enabled for this branch.
        # This list replaces the test-matrix.

View File

@@ -81,6 +81,7 @@ TACKER_NOVA_CA_CERTIFICATES_FILE=${TACKER_NOVA_CA_CERTIFICATES_FILE:-}
TACKER_NOVA_API_INSECURE=${TACKER_NOVA_API_INSECURE:-False}
HEAT_CONF_DIR=/etc/heat
CEILOMETER_CONF_DIR=/etc/ceilometer

source ${TACKER_DIR}/tacker/tests/contrib/post_test_hook_lib.sh

@@ -480,3 +481,11 @@ function modify_heat_flavor_policy_rule {
    # Allow non-admin projects with 'admin' roles to create flavors in Heat
    echo '"resource_types:OS::Nova::Flavor": "role:admin"' >> $policy_file
}

function configure_maintenance_event_types {
    local event_definitions_file=$CEILOMETER_CONF_DIR/event_definitions.yaml
    local maintenance_events_file=$TACKER_DIR/etc/ceilometer/maintenance_event_types.yaml

    echo "Configure maintenance event types to $event_definitions_file"
    cat $maintenance_events_file >> $event_definitions_file
}

View File

@@ -43,12 +43,16 @@ enable_plugin mistral https://opendev.org/openstack/mistral master

# Ceilometer
#CEILOMETER_PIPELINE_INTERVAL=300
CEILOMETER_EVENT_ALARM=True
enable_plugin ceilometer https://opendev.org/openstack/ceilometer master
enable_plugin aodh https://opendev.org/openstack/aodh master

# Blazar
enable_plugin blazar https://github.com/openstack/blazar.git master

# Fenix
enable_plugin fenix https://opendev.org/x/fenix.git master

# Tacker
enable_plugin tacker https://opendev.org/openstack/tacker master

View File

@@ -41,6 +41,11 @@ if is_service_enabled tacker; then
        tacker_check_and_download_images
        echo_summary "Registering default VIM"
        tacker_register_default_vim

        if is_service_enabled ceilometer; then
            echo_summary "Configure maintenance event types"
            configure_maintenance_event_types
        fi
    fi
fi

View File

@@ -24,3 +24,4 @@ Reference
   mistral_workflows_usage_guide.rst
   block_storage_usage_guide.rst
   reservation_policy_usage_guide.rst
   maintenance_usage_guide.rst

View File

@@ -0,0 +1,183 @@
..
Copyright 2020 Distributed Cloud and Network (DCN)
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
================================
VNF zero impact host maintenance
================================
Tacker allows you to perform host maintenance with zero impact on VNFs.
Maintenance workflows are performed by the ``Fenix`` service: creating a
maintenance session lets Fenix scale VNFs, migrate them, and patch hosts.
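
As an illustration, an administrator would start such a workflow by creating
a maintenance session through the Fenix admin API. The endpoint, port and
payload below are assumptions taken from the Fenix documentation and may
differ between releases; treat this as a sketch rather than a verified
command:

.. code-block:: console

    $ curl -X POST http://controller:12347/v1/maintenance \
        -H "X-Auth-Token: $OS_AUTH_TOKEN" \
        -H "Content-Type: application/json" \
        -d '{"workflow": "vnf", "state": "MAINTENANCE",
             "maintenance_at": "2020-04-01 03:00:00", "metadata": {}}'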
References
~~~~~~~~~~
- `Fenix <https://fenix.readthedocs.io/en/latest/>`_.
- `Fenix Configuration Guide <https://fenix.readthedocs.io/en/latest/configuration/dependencies.html>`_.
Installation and configurations
-------------------------------
1. You need Fenix, Ceilometer and Aodh OpenStack services.
2. Modify the below configuration files:
/etc/ceilometer/event_pipeline.yaml

.. code-block:: yaml

    sinks:
      - name: event_sink
        publishers:
          - panko://
          - notifier://
          - notifier://?topic=alarm.all
/etc/ceilometer/event_definitions.yaml

.. code-block:: yaml

    - event_type: 'maintenance.scheduled'
      traits:
        service:
          fields: payload.service
        allowed_actions:
          fields: payload.allowed_actions
        instance_ids:
          fields: payload.instance_ids
        reply_url:
          fields: payload.reply_url
        state:
          fields: payload.state
        session_id:
          fields: payload.session_id
        actions_at:
          fields: payload.actions_at
          type: datetime
        project_id:
          fields: payload.project_id
        reply_at:
          fields: payload.reply_at
          type: datetime
        metadata:
          fields: payload.metadata
    - event_type: 'maintenance.host'
      traits:
        host:
          fields: payload.host
        project_id:
          fields: payload.project_id
        session_id:
          fields: payload.session_id
        state:
          fields: payload.state
Deploying maintenance tosca template with tacker
------------------------------------------------
When template is normal
~~~~~~~~~~~~~~~~~~~~~~~
If the ``Fenix`` service is enabled and the maintenance event_types are
defined, then every VNF created by the legacy VNFM gets an
``ALL_MAINTENANCE`` resource in its Heat stack.
.. code-block:: yaml
    resources:
      ALL_maintenance:
        properties:
          alarm_actions:
            - http://openstack-master:9890/v1.0/vnfs/e8b9bec5-541b-492c-954e-cd4af71eda1f/maintenance/0cc65f4bba9c42bfadf4aebec6ae7348/hbyhgkav
          event_type: maintenance.scheduled
        type: OS::Aodh::EventAlarm
When template has maintenance property
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a VDU in the VNFD has the maintenance property, then the VNFM creates a
``[VDU_NAME]_MAINTENANCE`` alarm resource, which will later be used for VNF
software modification. The software modification workflow does not work yet
and will be updated; a sketch of the generated resource is shown below.
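
Assuming the behavior mirrors the ``ALL_maintenance`` example above, a
``VDU1_maintenance`` resource would look roughly like this; the trailing
access key is a random per-VDU value, so the one below is illustrative only:

.. code-block:: yaml

    resources:
      VDU1_maintenance:
        properties:
          alarm_actions:
            - http://openstack-master:9890/v1.0/vnfs/e8b9bec5-541b-492c-954e-cd4af71eda1f/maintenance/0cc65f4bba9c42bfadf4aebec6ae7348/hyizfjxv
          event_type: maintenance.scheduled
        type: OS::Aodh::EventAlarm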
``Sample tosca-template``:
.. code-block:: yaml
    tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
    description: VNF TOSCA template with maintenance

    metadata:
      template_name: sample-tosca-vnfd-maintenance

    topology_template:
      node_templates:
        VDU1:
          type: tosca.nodes.nfv.VDU.Tacker
          properties:
            maintenance: True
            image: cirros-0.4.0-x86_64-disk
          capabilities:
            nfv_compute:
              properties:
                disk_size: 1 GB
                mem_size: 512 MB
                num_cpus: 2

        CP1:
          type: tosca.nodes.nfv.CP.Tacker
          properties:
            management: true
            order: 0
            anti_spoofing_protection: false
          requirements:
            - virtualLink:
                node: VL1
            - virtualBinding:
                node: VDU1

        VL1:
          type: tosca.nodes.nfv.VL
          properties:
            network_name: net_mgmt
            vendor: Tacker

      policies:
        - SP1:
            type: tosca.policies.tacker.Scaling
            properties:
              increment: 1
              cooldown: 120
              min_instances: 1
              max_instances: 3
              default_instances: 2
              targets: [VDU1]
Configure maintenance constraints with config yaml
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When ``Fenix`` performs maintenance, it requires some constraints to
guarantee zero impact. Each VNF can set and update these constraints with a
config file like the one below.

.. code-block:: yaml

    maintenance:
      max_impacted_members: 1
      recovery_time: 60
      mitigation_type: True
      lead_time: 120
      migration_type: 'MIGRATE'
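
For example, assuming the constraints above are saved to
``maintenance_config.yaml`` and the VNF is named ``maintenance_vnf``, they
can be applied with a VNF update through the legacy Tacker CLI (a sketch;
the file and VNF names are placeholders):

.. code-block:: console

    $ tacker vnf-update --config-file maintenance_config.yaml maintenance_vnf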

View File

@@ -0,0 +1,34 @@
- event_type: 'maintenance.scheduled'
  traits:
    service:
      fields: payload.service
    allowed_actions:
      fields: payload.allowed_actions
    instance_ids:
      fields: payload.instance_ids
    reply_url:
      fields: payload.reply_url
    state:
      fields: payload.state
    session_id:
      fields: payload.session_id
    actions_at:
      fields: payload.actions_at
      type: datetime
    project_id:
      fields: payload.project_id
    reply_at:
      fields: payload.reply_at
      type: datetime
    metadata:
      fields: payload.metadata
- event_type: 'maintenance.host'
  traits:
    host:
      fields: payload.host
    project_id:
      fields: payload.project_id
    session_id:
      fields: payload.session_id
    state:
      fields: payload.state

View File

@@ -93,6 +93,8 @@ oslo.config.opts =
    tacker.vnfm.monitor_drivers.ceilometer.ceilometer = tacker.vnfm.monitor_drivers.ceilometer.ceilometer:config_opts
    tacker.vnfm.monitor_drivers.zabbix.zabbix = tacker.vnfm.monitor_drivers.zabbix.zabbix:config_opts
    tacker.alarm_receiver = tacker.alarm_receiver:config_opts
    tacker.plugins.fenix = tacker.plugins.fenix:config_opts
mistral.actions =
    tacker.vim_ping_action = tacker.nfvo.workflows.vim_monitor.vim_ping_action:PingVimAction

View File

@@ -52,6 +52,8 @@ class AlarmReceiver(wsgi.Middleware):
        if not self.handle_url(url):
            return
        prefix, info, params = self.handle_url(req.url)
        resource = 'trigger' if info[4] != 'maintenance' else 'maintenance'
        redirect = resource + 's'
        auth = cfg.CONF.keystone_authtoken
        alarm_auth = cfg.CONF.alarm_auth
        token = Token(username=alarm_auth.username,
@@ -66,19 +68,24 @@
        # Change the body request
        if req.body:
            body_dict = dict()
            body_dict[resource] = {}
            body_dict[resource].setdefault('params', {})
            # Update params in the body request
            body_info = jsonutils.loads(req.body)
            body_dict[resource]['params']['credential'] = info[6]
            if resource == 'maintenance':
                body_info.update({
                    'body': self._handle_maintenance_body(body_info)})
                del body_info['reason_data']
            else:
                # Update policy and action
                body_dict[resource]['policy_name'] = info[4]
                body_dict[resource]['action_name'] = info[5]
            body_dict[resource]['params']['data'] = body_info
            req.body = jsonutils.dump_as_bytes(body_dict)
            LOG.debug('Body alarm: %s', req.body)
        # Need to change url because of mandatory
        req.environ['PATH_INFO'] = prefix + redirect
        req.environ['QUERY_STRING'] = ''
        LOG.debug('alarm url in receiver: %s', req.url)
@@ -98,3 +105,15 @@
        prefix_url = '/%(collec)s/%(vnf_uuid)s/' % {'collec': p[2],
                                                    'vnf_uuid': p[3]}
        return prefix_url, p, params

    def _handle_maintenance_body(self, body_info):
        body = {}
        traits_list = body_info['reason_data']['event']['traits']
        if type(traits_list) is not list:
            return
        for key, t_type, val in traits_list:
            if t_type == 1 and val and (val[0] == '[' or val[0] == '{'):
                body[key] = eval(val)
            else:
                body[key] = val
        return body

View File

@@ -208,6 +208,10 @@ class InvalidInstReqInfoForScaling(exceptions.InvalidInput):
                "fixed ip_address or mac_address.")


class InvalidMaintenanceParameter(exceptions.InvalidInput):
    message = _("Could not find the required params for maintenance")


def _validate_service_type_list(data, valid_values=None):
    if not isinstance(data, list):
        msg = _("Invalid data format for service list: '%s'") % data
@@ -491,6 +495,37 @@ SUB_RESOURCE_ATTRIBUTE_MAP = {
                }
            }
        }
    },
    'maintenances': {
        'parent': {
            'collection_name': 'vnfs',
            'member_name': 'vnf'
        },
        'members': {
            'maintenance': {
                'parameters': {
                    'params': {
                        'allow_post': True,
                        'allow_put': False,
                        'is_visible': True,
                        'validate': {'type:dict_or_none': None}
                    },
                    'tenant_id': {
                        'allow_post': True,
                        'allow_put': False,
                        'validate': {'type:string': None},
                        'required_by_policy': False,
                        'is_visible': False
                    },
                    'response': {
                        'allow_post': False,
                        'allow_put': False,
                        'validate': {'type:dict_or_none': None},
                        'is_visible': True
                    }
                }
            }
        }
    }
}
@@ -623,3 +658,7 @@ class VNFMPluginBase(service_base.NFVPluginBase):
    def create_vnf_trigger(
            self, context, vnf_id, trigger):
        pass

    @abc.abstractmethod
    def create_vnf_maintenance(self, context, vnf_id, maintenance):
        pass

View File

@@ -36,11 +36,13 @@ class HealVnfRequest(base.TackerObject):
    # Version 1.0: Initial version
    # Version 1.1: Added vnf_instance_id
    # Version 1.2: Added stack_id for nested heat-template
    VERSION = '1.2'

    fields = {
        'vnfc_instance_id': fields.ListOfStringsField(nullable=True,
                                                      default=[]),
        'stack_id': fields.StringField(nullable=True, default=''),
        'cause': fields.StringField(nullable=True, default=None),
        'additional_params': fields.ListOfObjectsField(
            'HealVnfAdditionalParams', default=[])

View File

@@ -30,6 +30,7 @@ COMMON_PREFIXES = {
# Service operation status constants
ACTIVE = "ACTIVE"
ACK = "ACK"
PENDING_CREATE = "PENDING_CREATE"
PENDING_UPDATE = "PENDING_UPDATE"
@@ -40,6 +41,7 @@ PENDING_HEAL = "PENDING_HEAL"
DEAD = "DEAD"
ERROR = "ERROR"
NACK = "NACK"

ACTIVE_PENDING_STATUSES = (
    ACTIVE,
@@ -72,6 +74,10 @@ RES_EVT_SCALE = "SCALE"
RES_EVT_NA_STATE = "Not Applicable"
RES_EVT_ONBOARDED = "OnBoarded"
RES_EVT_HEAL = "HEAL"
RES_EVT_MAINTENANCE = [
    "MAINTENANCE", "SCALE_IN", "MAINTENANCE_COMPLETE",
    "PREPARE_MAINTENANCE", "PLANNED_MAINTENANCE", "INSTANCE_ACTION_DONE"
]

VNF_STATUS_TO_EVT_TYPES = {PENDING_CREATE: RES_EVT_CREATE,

tacker/plugins/fenix.py (new file, 456 lines)
View File

@@ -0,0 +1,456 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import requests
import time
import yaml
from oslo_config import cfg
from oslo_serialization import jsonutils
from tacker.common import clients
from tacker.common import log
from tacker.extensions import vnfm
from tacker.plugins.common import constants
from tacker.vnfm import vim_client
CONF = cfg.CONF
OPTS = [
cfg.IntOpt('lead_time', default=120,
help=_('Time for migration_type operation')),
cfg.IntOpt('max_interruption_time', default=120,
help=_('Time for how long live migration can take')),
cfg.IntOpt('recovery_time', default=2,
help=_('Time for migrated node could be fully running state')),
cfg.IntOpt('request_retries',
default=5,
help=_("Number of attempts to retry for request")),
cfg.IntOpt('request_retry_wait',
default=5,
help=_("Wait time (in seconds) between consecutive request"))
]
CONF.register_opts(OPTS, 'fenix')
MAINTENANCE_KEYS = (
'instance_ids', 'session_id', 'state', 'reply_url'
)
MAINTENANCE_SUB_KEYS = {
'PREPARE_MAINTENANCE': [('allowed_actions', 'list'),
('instance_ids', 'list')],
'PLANNED_MAINTENANCE': [('allowed_actions', 'list'),
('instance_ids', 'list')]
}
def config_opts():
return [('fenix', OPTS)]
class FenixPlugin(object):
def __init__(self):
self.REQUEST_RETRIES = cfg.CONF.fenix.request_retries
self.REQUEST_RETRY_WAIT = cfg.CONF.fenix.request_retry_wait
self.endpoint = None
self._instances = {}
self.vim_client = vim_client.VimClient()
@log.log
def request(self, plugin, context, vnf_dict, maintenance={},
data_func=None):
params_list = [maintenance]
method = 'put'
is_reply = True
if data_func:
action, create_func = data_func.split('_', 1)
create_func = '_create_%s_list' % create_func
if action in ['update', 'delete'] and hasattr(self, create_func):
params_list = getattr(self, create_func)(
context, vnf_dict, action)
method = action if action == 'delete' else 'put'
is_reply = False
for params in params_list:
self._request(plugin, context, vnf_dict, params, method, is_reply)
return len(params_list)
@log.log
def create_vnf_constraints(self, plugin, context, vnf_dict):
self.update_vnf_constraints(plugin, context, vnf_dict,
objects=['instance_group',
'project_instance'])
@log.log
def delete_vnf_constraints(self, plugin, context, vnf_dict):
self.update_vnf_constraints(plugin, context, vnf_dict,
action='delete',
objects=['instance_group',
'project_instance'])
@log.log
def update_vnf_instances(self, plugin, context, vnf_dict,
action='update'):
requests = self.update_vnf_constraints(plugin, context,
vnf_dict, action,
objects=['project_instance'])
if requests[0]:
self.post(context, vnf_dict)
@log.log
def update_vnf_constraints(self, plugin, context, vnf_dict,
action='update', objects=[]):
result = []
for obj in objects:
requests = self.request(plugin, context, vnf_dict,
data_func='%s_%s' % (action, obj))
result.append(requests)
return result
@log.log
def post(self, context, vnf_dict, **kwargs):
post_function = getattr(context, 'maintenance_post_function', None)
if not post_function:
return
post_function(context, vnf_dict)
del context.maintenance_post_function
@log.log
def project_instance_pre(self, context, vnf_dict):
key = vnf_dict['id']
if key not in self._instances:
self._instances.update({
key: self._get_instances(context, vnf_dict)})
@log.log
def validate_maintenance(self, maintenance):
body = maintenance['maintenance']['params']['data']['body']
if not set(MAINTENANCE_KEYS).issubset(body) or \
body['state'] not in constants.RES_EVT_MAINTENANCE:
raise vnfm.InvalidMaintenanceParameter()
sub_keys = MAINTENANCE_SUB_KEYS.get(body['state'], ())
for key, val_type in sub_keys:
if key not in body or type(body[key]) is not eval(val_type):
raise vnfm.InvalidMaintenanceParameter()
return body
@log.log
def _request(self, plugin, context, vnf_dict, maintenance,
method='put', is_reply=True):
client = self._get_openstack_clients(context, vnf_dict)
if not self.endpoint:
self.endpoint = client.keystone_session.get_endpoint(
service_type='maintenance', region_name=client.region_name)
if not self.endpoint:
raise vnfm.ServiceTypeNotFound(service_type_id='maintenance')
if 'reply_url' in maintenance:
url = maintenance['reply_url']
elif 'url' in maintenance:
url = "%s/%s" % (self.endpoint.rstrip('/'),
maintenance['url'].strip('/'))
else:
return
def create_headers():
return {
'X-Auth-Token': client.keystone_session.get_token(),
'Content-Type': 'application/json',
'Accept': 'application/json'
}
request_body = {}
request_body['headers'] = create_headers()
state = constants.ACK if vnf_dict['status'] == constants.ACTIVE \
else constants.NACK
if method == 'put':
data = maintenance.get('data', {})
if is_reply:
data['session_id'] = maintenance.get('session_id', '')
data['state'] = "%s_%s" % (state, maintenance['state'])
request_body['data'] = jsonutils.dump_as_bytes(data)
def request_wait():
retries = self.REQUEST_RETRIES
while retries > 0:
response = getattr(requests, method)(url, **request_body)
if response.status_code == 200:
break
else:
retries -= 1
time.sleep(self.REQUEST_RETRY_WAIT)
plugin.spawn_n(request_wait)
@log.log
def handle_maintenance(self, plugin, context, maintenance):
action = '_create_%s' % maintenance['state'].lower()
maintenance['data'] = {}
if hasattr(self, action):
getattr(self, action)(plugin, context, maintenance)
@log.log
def _create_maintenance(self, plugin, context, maintenance):
vnf_dict = maintenance.get('vnf', {})
vnf_dict['attributes'].update({'maintenance_scaled': 0})
plugin._update_vnf_post(context, vnf_dict['id'], constants.ACTIVE,
vnf_dict, constants.ACTIVE,
constants.RES_EVT_UPDATE)
instances = self._get_instances(context, vnf_dict)
instance_ids = [x['id'] for x in instances]
maintenance['data'].update({'instance_ids': instance_ids})
@log.log
def _create_scale_in(self, plugin, context, maintenance):
def post_function(context, vnf_dict):
scaled = int(vnf_dict['attributes'].get('maintenance_scaled', 0))
vnf_dict['attributes']['maintenance_scaled'] = str(scaled + 1)
plugin._update_vnf_post(context, vnf_dict['id'], constants.ACTIVE,
vnf_dict, constants.ACTIVE,
constants.RES_EVT_UPDATE)
instances = self._get_instances(context, vnf_dict)
instance_ids = [x['id'] for x in instances]
maintenance['data'].update({'instance_ids': instance_ids})
self.request(plugin, context, vnf_dict, maintenance)
vnf_dict = maintenance.get('vnf', {})
policy_action = self._create_scale_dict(plugin, context, vnf_dict)
if policy_action:
maintenance.update({'policy_action': policy_action})
context.maintenance_post_function = post_function
@log.log
def _create_prepare_maintenance(self, plugin, context, maintenance):
self._create_planned_maintenance(plugin, context, maintenance)
@log.log
def _create_planned_maintenance(self, plugin, context, maintenance):
def post_function(context, vnf_dict):
migration_type = self._get_constraints(vnf_dict,
key='migration_type',
default='MIGRATE')
maintenance['data'].update({'instance_action': migration_type})
self.request(plugin, context, vnf_dict, maintenance)
vnf_dict = maintenance.get('vnf', {})
instances = self._get_instances(context, vnf_dict)
request_instance_id = maintenance['instance_ids'][0]
selected = None
for instance in instances:
if instance['id'] == request_instance_id:
selected = instance
break
if not selected:
raise vnfm.InvalidMaintenanceParameter()
migration_type = self._get_constraints(vnf_dict, key='migration_type',
default='MIGRATE')
if migration_type == 'OWN_ACTION':
policy_action = self._create_migrate_dict(context, vnf_dict,
selected)
maintenance.update({'policy_action': policy_action})
context.maintenance_post_function = post_function
else:
post_function(context, vnf_dict)
@log.log
def _create_maintenance_complete(self, plugin, context, maintenance):
def post_function(context, vnf_dict):
vim_res = self.vim_client.get_vim(context, vnf_dict['vim_id'])
scaled = int(vnf_dict['attributes'].get('maintenance_scaled', 0))
if vim_res['vim_type'] == 'openstack':
scaled -= 1
vnf_dict['attributes']['maintenance_scaled'] = str(scaled)
plugin._update_vnf_post(context, vnf_dict['id'],
constants.ACTIVE, vnf_dict,
constants.ACTIVE,
constants.RES_EVT_UPDATE)
if scaled > 0:
scale_out(plugin, context, vnf_dict)
else:
instances = self._get_instances(context, vnf_dict)
instance_ids = [x['id'] for x in instances]
maintenance['data'].update({'instance_ids': instance_ids})
self.request(plugin, context, vnf_dict, maintenance)
def scale_out(plugin, context, vnf_dict):
policy_action = self._create_scale_dict(plugin, context, vnf_dict,
scale_type='out')
context.maintenance_post_function = post_function
plugin._vnf_action.invoke(policy_action['action'],
'execute_action', plugin=plugin,
context=context, vnf_dict=vnf_dict,
args=policy_action['args'])
vnf_dict = maintenance.get('vnf', {})
scaled = vnf_dict.get('attributes', {}).get('maintenance_scaled', 0)
if int(scaled):
policy_action = self._create_scale_dict(plugin, context, vnf_dict,
scale_type='out')
maintenance.update({'policy_action': policy_action})
context.maintenance_post_function = post_function
@log.log
def _create_scale_dict(self, plugin, context, vnf_dict, scale_type='in'):
policy_action, scale_dict = {}, {}
policies = self._get_scaling_policies(plugin, context, vnf_dict)
if not policies:
return
scale_dict['type'] = scale_type
scale_dict['policy'] = policies[0]['name']
policy_action['action'] = 'autoscaling'
policy_action['args'] = {'scale': scale_dict}
return policy_action
@log.log
def _create_migrate_dict(self, context, vnf_dict, instance):
policy_action, heal_dict = {}, {}
heal_dict['vdu_name'] = instance['name']
heal_dict['cause'] = ["Migrate resource '%s' to other host."]
heal_dict['stack_id'] = instance['stack_name']
if 'scaling_group_names' in vnf_dict['attributes']:
sg_names = vnf_dict['attributes']['scaling_group_names']
sg_names = list(jsonutils.loads(sg_names).keys())
heal_dict['heat_tpl'] = '%s_res.yaml' % sg_names[0]
policy_action['action'] = 'vdu_autoheal'
policy_action['args'] = heal_dict
return policy_action
@log.log
def _create_instance_group_list(self, context, vnf_dict, action):
group_id = vnf_dict['attributes'].get('maintenance_group', '')
if not group_id:
return
def get_constraints(data):
maintenance_config = self._get_constraints(vnf_dict)
data['max_impacted_members'] = maintenance_config.get(
'max_impacted_members', 1)
data['recovery_time'] = maintenance_config.get('recovery_time', 60)
params, data = {}, {}
params['url'] = '/instance_group/%s' % group_id
if action == 'update':
data['group_id'] = group_id
data['project_id'] = vnf_dict['tenant_id']
data['group_name'] = 'tacker_nonha_app_group_%s' % vnf_dict['id']
data['anti_affinity_group'] = False
data['max_instances_per_host'] = 0
data['resource_mitigation'] = True
get_constraints(data)
params.update({'data': data})
return [params]
@log.log
def _create_project_instance_list(self, context, vnf_dict, action):
group_id = vnf_dict.get('attributes', {}).get('maintenance_group', '')
if not group_id:
return
params_list = []
url = '/instance'
instances = self._get_instances(context, vnf_dict)
_instances = self._instances.get(vnf_dict['id'], {})
if _instances:
if action == 'update':
instances = [v for v in instances if v not in _instances]
del self._instances[vnf_dict['id']]
else:
instances = [v for v in _instances if v not in instances]
if len(instances) != len(_instances):
del self._instances[vnf_dict['id']]
if action == 'update':
maintenance_configs = self._get_constraints(vnf_dict)
for instance in instances:
params, data = {}, {}
params['url'] = '%s/%s' % (url, instance['id'])
data['project_id'] = instance['project_id']
data['instance_id'] = instance['id']
data['instance_name'] = instance['name']
data['migration_type'] = maintenance_configs.get(
'migration_type', 'MIGRATE')
data['resource_mitigation'] = maintenance_configs.get(
'mitigation_type', True)
data['max_interruption_time'] = maintenance_configs.get(
'max_interruption_time',
cfg.CONF.fenix.max_interruption_time)
data['lead_time'] = maintenance_configs.get(
'lead_time', cfg.CONF.fenix.lead_time)
data['group_id'] = group_id
params.update({'data': data})
params_list.append(params)
elif action == 'delete':
for instance in instances:
params = {}
params['url'] = '%s/%s' % (url, instance['id'])
params_list.append(params)
return params_list
@log.log
def _get_instances(self, context, vnf_dict):
vim_res = self.vim_client.get_vim(context, vnf_dict['vim_id'])
action = '_get_instances_with_%s' % vim_res['vim_type']
if hasattr(self, action):
return getattr(self, action)(context, vnf_dict)
return {}
@log.log
def _get_instances_with_openstack(self, context, vnf_dict):
def get_attrs_with_link(links):
attrs = {}
for link in links:
href, rel = link['href'], link['rel']
if rel == 'self':
words = href.split('/')
attrs['project_id'] = words[5]
attrs['stack_name'] = words[7]
break
return attrs
instances = []
client = self._get_openstack_clients(context, vnf_dict)
resources = client.heat.resources.list(vnf_dict['instance_id'],
nested_depth=2)
for resource in resources:
if resource.resource_type == 'OS::Nova::Server' and \
resource.resource_status != 'DELETE_IN_PROGRESS':
instance = {
'id': resource.physical_resource_id,
'name': resource.resource_name
}
instance.update(get_attrs_with_link(resource.links))
instances.append(instance)
return instances
@log.log
def _get_scaling_policies(self, plugin, context, vnf_dict):
vnf_id = vnf_dict['id']
policies = []
if 'scaling_group_names' in vnf_dict['attributes']:
policies = plugin.get_vnf_policies(
context, vnf_id, filters={'type': constants.POLICY_SCALING})
return policies
@log.log
def _get_constraints(self, vnf, key=None, default=None):
config = vnf.get('attributes', {}).get('config', '{}')
maintenance_config = yaml.safe_load(config).get('maintenance', {})
if key:
return maintenance_config.get(key, default)
return maintenance_config
@log.log
def _get_openstack_clients(self, context, vnf_dict):
vim_res = self.vim_client.get_vim(context, vnf_dict['vim_id'])
region_name = vnf_dict.setdefault('placement_attr', {}).get(
'region_name', None)
client = clients.OpenstackClients(auth_attr=vim_res['vim_auth'],
region_name=region_name)
return client

View File

@@ -0,0 +1,51 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Maintenance VNF with Fenix
metadata:
  template_name: tosca-vnfd-maintenance

topology_template:
  node_templates:
    VDU1:
      capabilities:
        nfv_compute:
          properties:
            disk_size: 15 GB
            mem_size: 2048 MB
            num_cpus: 2
      properties:
        availability_zone: nova
        image: cirros-0.4.0-x86_64-disk
        maintenance: true
        mgmt_driver: noop
      type: tosca.nodes.nfv.VDU.Tacker

    CP11:
      properties:
        anti_spoofing_protection: false
        management: true
        order: 0
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
      type: tosca.nodes.nfv.CP.Tacker

    VL1:
      properties:
        network_name: net_mgmt
        vendor: Tacker
      type: tosca.nodes.nfv.VL

  policies:
    - SP1:
        properties:
          cooldown: 120
          default_instances: 3
          increment: 1
          max_instances: 3
          min_instances: 1
          targets:
            - VDU1
        type: tosca.policies.tacker.Scaling

View File

@@ -191,6 +191,13 @@ class BaseTackerTest(base.BaseTestCase):
        auth_ses = session.Session(auth=auth, verify=verify)
        return glance_client.Client(session=auth_ses)

    @classmethod
    def aodh_http_client(cls):
        auth_session = cls.get_auth_session()
        return SessionClient(session=auth_session,
                             service_type='alarming',
                             region_name='RegionOne')

    def get_vdu_resource(self, stack_id, res_name):
        return self.h_client.resources.get(stack_id, res_name)

View File

@@ -0,0 +1,194 @@
# Copyright 2020 Distributed Cloud and Network (DCN)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from datetime import datetime
import time
import yaml
from oslo_serialization import jsonutils
from oslo_utils import uuidutils
from tacker.plugins.common import constants as evt_constants
from tacker.tests import constants
from tacker.tests.functional import base
from tacker.tests.utils import read_file
class VnfTestMaintenanceMonitor(base.BaseTackerTest):
def _test_vnf_tosca_maintenance(self, vnfd_file, vnf_name):
input_yaml = read_file(vnfd_file)
tosca_dict = yaml.safe_load(input_yaml)
tosca_arg = {'vnfd': {'name': vnf_name,
'attributes': {'vnfd': tosca_dict}}}
# Create vnfd with tosca template
vnfd_instance = self.client.create_vnfd(body=tosca_arg)
self.assertIsNotNone(vnfd_instance)
# Create vnf with vnfd_id
vnfd_id = vnfd_instance['vnfd']['id']
vnf_arg = {'vnf': {'vnfd_id': vnfd_id, 'name': vnf_name}}
vnf_instance = self.client.create_vnf(body=vnf_arg)
vnf_id = vnf_instance['vnf']['id']
self.validate_vnf_instance(vnfd_instance, vnf_instance)
def _wait_vnf_active_and_assert_vdu_count(vdu_count, scale_type=None):
self.wait_until_vnf_active(
vnf_id,
constants.VNF_CIRROS_CREATE_TIMEOUT,
constants.ACTIVE_SLEEP_TIME)
vnf = self.client.show_vnf(vnf_id)['vnf']
self.assertEqual(vdu_count, len(jsonutils.loads(
vnf['mgmt_ip_address'])['VDU1']))
def _verify_maintenance_attributes(vnf_dict):
vnf_attrs = vnf_dict.get('attributes', {})
maintenance_vdus = vnf_attrs.get('maintenance', '{}')
maintenance_vdus = jsonutils.loads(maintenance_vdus)
maintenance_url = vnf_attrs.get('maintenance_url', '')
words = maintenance_url.split('/')
self.assertEqual(len(maintenance_vdus.keys()), 2)
self.assertEqual(len(words), 8)
self.assertEqual(words[5], vnf_dict['id'])
self.assertEqual(words[7], vnf_dict['tenant_id'])
maintenance_urls = {}
for vdu, access_key in maintenance_vdus.items():
maintenance_urls[vdu] = maintenance_url + '/' + access_key
return maintenance_urls
def _verify_maintenance_alarm(url, project_id):
aodh_client = self.aodh_http_client()
alarm_query = {
'and': [
{'=': {'project_id': project_id}},
{'=~': {'alarm_actions': url}}]}
# Check alarm instance for MAINTENANCE_ALL
alarm_url = 'v2/query/alarms'
encoded_data = jsonutils.dumps(alarm_query)
encoded_body = jsonutils.dumps({'filter': encoded_data})
resp, response_body = aodh_client.do_request(alarm_url, 'POST',
body=encoded_body)
self.assertEqual(len(response_body), 1)
alarm_dict = response_body[0]
self.assertEqual(url, alarm_dict.get('alarm_actions', [])[0])
return response_body[0]
def _verify_maintenance_actions(vnf_dict, alarm_dict):
tacker_client = self.tacker_http_client()
alarm_url = alarm_dict.get('alarm_actions', [])[0]
tacker_url = '/%s' % alarm_url[alarm_url.find('v1.0'):]
def _request_maintenance_action(state):
alarm_body = _create_alarm_data(vnf_dict, alarm_dict, state)
resp, response_body = tacker_client.do_request(
tacker_url, 'POST', body=alarm_body)
time.sleep(constants.SCALE_SLEEP_TIME)
target_scaled = -1
if state == 'SCALE_IN':
target_scaled = 1
_wait_vnf_active_and_assert_vdu_count(2, scale_type='in')
elif state == 'MAINTENANCE_COMPLETE':
target_scaled = 0
_wait_vnf_active_and_assert_vdu_count(3, scale_type='out')
updated_vnf = self.client.show_vnf(vnf_id)['vnf']
scaled = updated_vnf['attributes'].get('maintenance_scaled',
'-1')
self.assertEqual(int(scaled), target_scaled)
time.sleep(constants.SCALE_WINDOW_SLEEP_TIME)
time.sleep(constants.SCALE_WINDOW_SLEEP_TIME)
_request_maintenance_action('SCALE_IN')
_request_maintenance_action('MAINTENANCE_COMPLETE')
self.verify_vnf_crud_events(
vnf_id, evt_constants.RES_EVT_SCALE,
evt_constants.ACTIVE, cnt=2)
self.verify_vnf_crud_events(
vnf_id, evt_constants.RES_EVT_SCALE,
evt_constants.PENDING_SCALE_OUT, cnt=1)
self.verify_vnf_crud_events(
vnf_id, evt_constants.RES_EVT_SCALE,
evt_constants.PENDING_SCALE_IN, cnt=1)
def _create_alarm_data(vnf_dict, alarm_dict, state):
'''Create a raw alarm payload to trigger Tacker directly.
This function creates the raw payload that Fenix would send while
processing maintenance procedures. Alarm_receiver and specific steps
of the Fenix workflow are tested by sending this raw payload to
Tacker directly.
'''
utc_time = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')
fake_url = 'http://localhost/'
sample_data = {
'alarm_name': alarm_dict['name'],
'alarm_id': alarm_dict['alarm_id'],
'severity': 'low',
'previous': 'alarm',
'current': 'alarm',
'reason': 'Alarm test for Tacker functional test',
'reason_data': {
'type': 'event',
'event': {
'message_id': uuidutils.generate_uuid(),
'event_type': 'maintenance.scheduled',
'generated': utc_time,
'traits': [
['project_id', 1, vnf_dict['tenant_id']],
['allowed_actions', 1, '[]'],
['instance_ids', 1, fake_url],
['reply_url', 1, fake_url],
['state', 1, state],
['session_id', 1, uuidutils.generate_uuid()],
['actions_at', 4, utc_time],
['reply_at', 4, utc_time],
['metadata', 1, '{}']
],
'raw': {},
'message_signature': uuidutils.generate_uuid()
}
}
}
return jsonutils.dumps(sample_data)
_wait_vnf_active_and_assert_vdu_count(3)
urls = _verify_maintenance_attributes(vnf_instance['vnf'])
maintenance_url = urls.get('ALL', '')
project_id = vnf_instance['vnf']['tenant_id']
alarm_dict = _verify_maintenance_alarm(maintenance_url, project_id)
_verify_maintenance_actions(vnf_instance['vnf'], alarm_dict)
try:
self.client.delete_vnf(vnf_id)
except Exception:
assert False, (
'Failed to delete vnf %s after the maintenance test' % vnf_id)
self.addCleanup(self.client.delete_vnfd, vnfd_id)
self.addCleanup(self.wait_until_vnf_delete, vnf_id,
constants.VNF_CIRROS_DELETE_TIMEOUT)
def test_vnf_alarm_maintenance(self):
# instance_maintenance = self._get_instance_maintenance()
self._test_vnf_tosca_maintenance(
'sample-tosca-vnfd-maintenance.yaml',
'maintenance_vnf')

View File

@@ -107,6 +107,7 @@ class TestVDU(base.TestCase):
            cause=["Unable to reach while monitoring resource: 'VDU1'"])
        self.heal_request_data_obj = heal_vnf_request.HealVnfRequest(
            cause='VNF monitoring fails.',
            stack_id=vnf_dict['instance_id'],
            additional_params=[self.additional_paramas_obj])
        self.heal_vdu = vdu.Vdu(self.context, vnf_dict,
                                self.heal_request_data_obj)

View File

@@ -32,6 +32,10 @@ class FakeCVNFMonitor(mock.Mock):
    pass


class FakePlugin(mock.Mock):
    pass


class FakeK8SVimClient(mock.Mock):
    pass

@@ -44,6 +48,8 @@ class TestCVNFMPlugin(db_base.SqlTestCase):
        self._mock_vim_client()
        self._stub_get_vim()
        self._mock_vnf_monitor()
        self._mock_vnf_maintenance_monitor()
        self._mock_vnf_maintenance_plugin()
        self._insert_dummy_vim()
        self.vnfm_plugin = plugin.VNFMPlugin()
        mock.patch('tacker.db.common_services.common_services_db_plugin.'
@@ -108,6 +114,22 @@ class TestCVNFMPlugin(db_base.SqlTestCase):
        self._mock(
            'tacker.vnfm.monitor.VNFMonitor', fake_vnf_monitor)

    def _mock_vnf_maintenance_monitor(self):
        self._vnf_maintenance_mon = mock.Mock(wraps=FakeCVNFMonitor())
        fake_vnf_maintenance_monitor = mock.Mock()
        fake_vnf_maintenance_monitor.return_value = self._vnf_maintenance_mon
        self._mock(
            'tacker.vnfm.monitor.VNFMaintenanceAlarmMonitor',
            fake_vnf_maintenance_monitor)

    def _mock_vnf_maintenance_plugin(self):
        self._vnf_maintenance_plugin = mock.Mock(wraps=FakePlugin())
        fake_vnf_maintenance_plugin = mock.Mock()
        fake_vnf_maintenance_plugin.return_value = self._vnf_maintenance_plugin
        self._mock(
            'tacker.plugins.fenix.FenixPlugin',
            fake_vnf_maintenance_plugin)

    def _insert_dummy_vnf_template(self):
        session = self.context.session
        vnf_template = vnfm_db.VNFD(

View File

@@ -255,3 +255,41 @@ class TestVNFReservationAlarmMonitor(testtools.TestCase):
        response = test_vnf_reservation_monitor.update_vnf_with_reservation(
            self.plugin, self.context, vnf, policy_dict)
        self.assertEqual(len(response.keys()), 3)


class TestVNFMaintenanceAlarmMonitor(testtools.TestCase):

    def setUp(self):
        super(TestVNFMaintenanceAlarmMonitor, self).setUp()

    def test_process_alarm_for_vnf(self):
        vnf = {'id': MOCK_VNF_ID}
        trigger = {'params': {'data': {
            'alarm_id': MOCK_VNF_ID, 'current': 'alarm'}}}
        test_vnf_maintenance_monitor = monitor.VNFMaintenanceAlarmMonitor()
        response = test_vnf_maintenance_monitor.process_alarm_for_vnf(
            vnf, trigger)
        self.assertEqual(response, True)

    @mock.patch('tacker.db.common_services.common_services_db_plugin.'
                'CommonServicesPluginDb.create_event')
    def test_update_vnf_with_alarm(self, mock_db_service):
        mock_db_service.return_value = {
            'event_type': 'MONITOR',
            'resource_id': '9770fa22-747d-426e-9819-057a95cb778c',
            'timestamp': '2018-10-30 06:01:45.628162',
            'event_details': {'Alarm URL set successfully': {
                'start_actions': 'alarm'}},
            'resource_state': 'CREATE',
            'id': '4583',
            'resource_type': 'vnf'}
        vnf = {
            'id': MOCK_VNF_ID,
            'tenant_id': 'ad7ebc56538745a08ef7c5e97f8bd437',
            'status': 'insufficient_data'}
        vdu_names = ['VDU1']
        test_vnf_maintenance_monitor = monitor.VNFMaintenanceAlarmMonitor()
        response = test_vnf_maintenance_monitor.update_vnf_with_maintenance(
            vnf, vdu_names)
        result_keys = len(response) + len(response.get('vdus', {}))
        self.assertEqual(result_keys, 4)

View File

@@ -49,7 +49,12 @@ class FakeDriverManager(mock.Mock):

class FakeVNFMonitor(mock.Mock):
    def update_vnf_with_maintenance(self, vnf_dict, maintenance_vdus):
        url = 'http://local:9890/v1.0/vnfs/%s/maintenance/%s' % (
            vnf_dict['id'], vnf_dict['tenant_id'])
        return {'url': url,
                'vdus': {'ALL': 'ad7ebc56',
                         'VDU1': '538745a0'}}


class FakeGreenPool(mock.Mock):

@@ -60,6 +65,10 @@ class FakeVimClient(mock.Mock):
    pass


class FakePlugin(mock.Mock):
    pass


class FakeException(Exception):
    pass

@@ -143,6 +152,8 @@ class TestVNFMPlugin(db_base.SqlTestCase):
        self._mock_vnf_monitor()
        self._mock_vnf_alarm_monitor()
        self._mock_vnf_reservation_monitor()
        self._mock_vnf_maintenance_monitor()
        self._mock_vnf_maintenance_plugin()
        self._insert_dummy_vim()
        self.vnfm_plugin = plugin.VNFMPlugin()
        mock.patch('tacker.db.common_services.common_services_db_plugin.'
@@ -219,6 +230,22 @@ class TestVNFMPlugin(db_base.SqlTestCase):
            'tacker.vnfm.monitor.VNFReservationAlarmMonitor',
            fake_vnf_reservation_monitor)

    def _mock_vnf_maintenance_monitor(self):
        self._vnf_maintenance_mon = mock.Mock(wraps=FakeVNFMonitor())
        fake_vnf_maintenance_monitor = mock.Mock()
        fake_vnf_maintenance_monitor.return_value = self._vnf_maintenance_mon
        self._mock(
            'tacker.vnfm.monitor.VNFMaintenanceAlarmMonitor',
            fake_vnf_maintenance_monitor)

    def _mock_vnf_maintenance_plugin(self):
        self._vnf_maintenance_plugin = mock.Mock(wraps=FakePlugin())
        fake_vnf_maintenance_plugin = mock.Mock()
        fake_vnf_maintenance_plugin.return_value = self._vnf_maintenance_plugin
        self._mock(
            'tacker.plugins.fenix.FenixPlugin',
            fake_vnf_maintenance_plugin)

    def _insert_dummy_vnf_template(self):
        session = self.context.session
        vnf_template = vnfm_db.VNFD(
@@ -1108,6 +1135,7 @@ class TestVNFMPlugin(db_base.SqlTestCase):
            parameter='VDU1',
            cause=["Unable to reach while monitoring resource: 'VDU1'"])
        heal_request_data_obj = heal_vnf_request.HealVnfRequest(
            stack_id=dummy_device_obj['instance_id'],
            cause='VNF monitoring fails.',
            additional_params=[additional_params_obj])
        result = self.vnfm_plugin.heal_vnf(self.context,

View File

@@ -278,6 +278,10 @@ node_types:
        type: tosca.datatypes.tacker.VduReservationMetadata
        required: false

      maintenance:
        type: boolean
        required: false

  tosca.nodes.nfv.CP.Tacker:
    derived_from: tosca.nodes.nfv.CP
    properties:

View File

@@ -19,6 +19,7 @@ import yaml
from collections import OrderedDict

from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import uuidutils

from tacker._i18n import _
@@ -93,11 +94,13 @@ deletenodes = (MONITORING, FAILURE, PLACEMENT)

HEAT_RESOURCE_MAP = {
    "flavor": "OS::Nova::Flavor",
    "image": "OS::Glance::WebImage",
    "maintenance": "OS::Aodh::EventAlarm"
}

SCALE_GROUP_RESOURCE = "OS::Heat::AutoScalingGroup"
SCALE_POLICY_RESOURCE = "OS::Heat::ScalingPolicy"
PLACEMENT_POLICY_RESOURCE = "OS::Nova::ServerGroup"


@log.log
@@ -258,6 +261,12 @@ def pre_process_alarm_resources(vnf, template, vdu_metadata, unique_id=None):
            'before_end_actions': {
                'event_type': 'lease.event.before_end_lease'},
            'end_actions': {'event_type': 'lease.event.end_lease'}}
    maintenance_actions = _process_alarm_actions_for_maintenance(vnf)
    if maintenance_actions:
        alarm_actions.update(maintenance_actions)
        alarm_resources['event_types'] = {}
        alarm_resources['event_types'].update({
            'ALL_maintenance': {'event_type': 'maintenance.scheduled'}})
    alarm_resources['query_metadata'] = query_metadata
    alarm_resources['alarm_actions'] = alarm_actions
    return alarm_resources
@@ -337,6 +346,22 @@ def _process_alarm_actions_for_reservation(vnf, policy):
    return alarm_actions


def _process_alarm_actions_for_maintenance(vnf):
    # process alarm url here
    alarm_actions = dict()
    maintenance_props = vnf['attributes'].get('maintenance', '{}')
    maintenance_props = jsonutils.loads(maintenance_props)
    maintenance_url = vnf['attributes'].get('maintenance_url', '')
    for vdu, access_key in maintenance_props.items():
        action = '%s_maintenance' % vdu
        alarm_url = '%s/%s' % (maintenance_url.rstrip('/'), access_key)
        if alarm_url:
            LOG.debug('Alarm url in heat %s', alarm_url)
            alarm_actions[action] = dict()
            alarm_actions[action]['alarm_actions'] = [alarm_url]
    return alarm_actions


def get_volumes(template):
    volume_dict = dict()
    node_tpl = template['topology_template']['node_templates']
@@ -423,6 +448,8 @@ def add_resources_tpl(heat_dict, hot_res_tpl):
                "properties": {}
            }

            if res == "maintenance":
                continue
            for prop, val in (vdu_dict).items():
                # change from 'get_input' to 'get_param' to meet HOT template
                if isinstance(val, dict):
@@ -517,6 +544,7 @@ def post_process_heat_template(heat_tpl, mgmt_ports, metadata,
        if heat_dict['resources'].get(vdu_name):
            heat_dict['resources'][vdu_name]['properties']['metadata'] =\
                metadata_dict
    add_resources_tpl(heat_dict, res_tpl)

    query_metadata = alarm_resources.get('query_metadata')
    alarm_actions = alarm_resources.get('alarm_actions')
@@ -532,16 +560,14 @@
    if alarm_actions:
        for trigger_name, alarm_actions_dict in alarm_actions.items():
            if heat_dict['resources'].get(trigger_name):
                heat_dict['resources'][trigger_name]['properties'].update(
                    alarm_actions_dict)
    if event_types:
        for trigger_name, event_type in event_types.items():
            if heat_dict['resources'].get(trigger_name):
                heat_dict['resources'][trigger_name]['properties'].update(
                    event_type)

    for res in heat_dict["resources"].values():
        if not res['type'] == HEAT_SOFTWARE_CONFIG:
            continue
@@ -1038,6 +1064,15 @@ def findvdus(template):
    return vdus


def find_maintenance_vdus(template):
    maintenance_vdu_names = list()
    vdus = findvdus(template)
    for nt in vdus:
        if nt.get_properties().get('maintenance'):
            maintenance_vdu_names.append(nt.name)
    return maintenance_vdu_names


def get_flavor_dict(template, flavor_extra_input=None):
    flavor_dict = {}
    vdus = findvdus(template)
@@ -1152,6 +1187,27 @@ def get_resources_dict(template, flavor_extra_input=None):
    return res_dict


def add_maintenance_resources(template, res_tpl):
    res_dict = {}
    maintenance_vdus = find_maintenance_vdus(template)
    maintenance_vdus.append('ALL')
    if maintenance_vdus:
        for vdu_name in maintenance_vdus:
            res_dict[vdu_name] = {}
    res_tpl['maintenance'] = res_dict


@log.log
def get_policy_dict(template, policy_type):
    policy_dict = dict()
    for policy in template.policies:
        if (policy.type_definition.is_derived_from(policy_type)):
            policy_attrs = dict()
            policy_attrs['targets'] = policy.targets
            policy_dict[policy.name] = policy_attrs
    return policy_dict


@log.log
def get_scaling_policy(template):
    scaling_policy_names = list()

View File

@@ -427,12 +427,24 @@ class OpenStack(abstract_driver.VnfAbstractDriver,
    @log.log
    def heal_wait(self, plugin, context, vnf_dict, auth_attr,
                  region_name=None):
        region_name = vnf_dict.get('placement_attr', {}).get(
            'region_name', None)
        heatclient = hc.HeatClient(auth_attr, region_name)
        stack_id = vnf_dict.get('heal_stack_id', vnf_dict['instance_id'])
        stack = self._wait_until_stack_ready(stack_id,
            auth_attr, infra_cnst.STACK_UPDATE_IN_PROGRESS,
            infra_cnst.STACK_UPDATE_COMPLETE,
            vnfm.VNFHealWaitFailed, region_name=region_name)
        # scaling enabled
        if vnf_dict['attributes'].get('scaling_group_names'):
            group_names = jsonutils.loads(
                vnf_dict['attributes'].get('scaling_group_names')).values()
            mgmt_ips = self._find_mgmt_ips_from_groups(heatclient,
                                                       vnf_dict['instance_id'],
                                                       group_names)
        else:
            mgmt_ips = self._find_mgmt_ips(stack.outputs)

        if mgmt_ips:
            vnf_dict['mgmt_ip_address'] = jsonutils.dump_as_bytes(mgmt_ips)
@@ -462,10 +474,13 @@
            return mgmt_ips

        mgmt_ips = {}
        ignore_status = ['DELETE_COMPLETE', 'DELETE_IN_PROGRESS']
        for group_name in group_names:
            # Get scale group
            grp = heat_client.resource_get(instance_id, group_name)
            for rsc in heat_client.resource_get_list(grp.physical_resource_id):
                if rsc.resource_status in ignore_status:
                    continue
                # Get list of resources in scale group
                scale_rsc = heat_client.resource_get(grp.physical_resource_id,
                                                     rsc.resource_name)

View File

@@ -338,6 +338,10 @@ class TOSCAToHOT(object):
                heat_template_yaml, scaling_policy_names)
            self.vnf['attributes']['scaling_group_names'] =\
                jsonutils.dump_as_bytes(scaling_group_dict)

        if self.vnf['attributes'].get('maintenance', None):
            toscautils.add_maintenance_resources(tosca, res_tpl)

        heat_template_yaml = toscautils.post_process_heat_template(
            heat_template_yaml, mgmt_ports, metadata, alarm_resources,
            res_tpl, block_storage_details, self.unsupported_props,

View File

@@ -33,6 +33,7 @@ class Vdu(object):
         self.context = context
         self.vnf_dict = vnf_dict
         self.heal_request_data_obj = heal_request_data_obj
+        self.stack_id = self.heal_request_data_obj.stack_id
         vim_id = self.vnf_dict['vim_id']
         vim_res = vim_client.VimClient().get_vim(context, vim_id)
         placement_attr = vnf_dict.get('placement_attr', {})
@@ -53,15 +54,15 @@ class Vdu(object):
         additional_params = self.heal_request_data_obj.additional_params
         for additional_param in additional_params:
             resource_name = additional_param.parameter
-            res_status = self._get_resource_status(
-                self.vnf_dict['instance_id'], resource_name)
+            res_status = self._get_resource_status(self.stack_id,
+                                                   resource_name)
             if res_status != 'CHECK_FAILED':
                 self.heat_client.resource_mark_unhealthy(
-                    stack_id=self.vnf_dict['instance_id'],
+                    stack_id=self.stack_id,
                     resource_name=resource_name, mark_unhealthy=True,
                     resource_status_reason=additional_param.cause)
                 LOG.debug("Heat stack '%s' resource '%s' marked as "
-                          "unhealthy", self.vnf_dict['instance_id'],
+                          "unhealthy", self.stack_id,
                           resource_name)
                 evt_details = (("HealVnfRequest invoked to mark resource "
                                "'%s' to unhealthy.") % resource_name)
@@ -70,7 +71,7 @@ class Vdu(object):
                                      evt_details)
             else:
                 LOG.debug("Heat stack '%s' resource '%s' already mark "
-                          "unhealthy.", self.vnf_dict['instance_id'],
+                          "unhealthy.", self.stack_id,
                           resource_name)

     def heal_vdu(self):
@@ -81,11 +82,11 @@ class Vdu(object):
         # Mark all the resources as unhealthy
         self._resource_mark_unhealthy()

-        self.heat_client.update(stack_id=self.vnf_dict['instance_id'],
+        self.heat_client.update(stack_id=self.stack_id,
                                 existing=True)
         LOG.debug("Heat stack '%s' update initiated to revive "
-                  "unhealthy resources.", self.vnf_dict['instance_id'])
+                  "unhealthy resources.", self.stack_id)
         evt_details = (("HealVnfRequest invoked to update the stack "
-                       "'%s'") % self.vnf_dict['instance_id'])
+                       "'%s'") % self.stack_id)
         vnfm_utils.log_events(self.context, self.vnf_dict,
                               constants.RES_EVT_HEAL, evt_details)
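Note: Vdu now heals against heal_request_data_obj.stack_id instead of always using the root instance_id, so a nested scaling stack can be targeted directly. A sketch of the request shape, with hypothetical dataclasses standing in for tacker.objects:

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical stand-ins for objects.HealVnfAdditionalParams and
    # objects.HealVnfRequest.
    @dataclass
    class HealVnfAdditionalParams:
        parameter: str
        cause: List[str]

    @dataclass
    class HealVnfRequest:
        stack_id: str
        cause: str
        additional_params: List[HealVnfAdditionalParams] = \
            field(default_factory=list)

    req = HealVnfRequest(
        stack_id='nested-stack-uuid',  # may differ from the root stack
        cause="Failed to monitor VDU resource 'VDU1'",
        additional_params=[HealVnfAdditionalParams(
            parameter='VDU1',
            cause=["Unable to reach while monitoring resource: 'VDU1'"])])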

@@ -17,6 +17,8 @@
 import ast
 import copy
 import inspect
+import random
+import string
 import threading
 import time

@@ -427,3 +429,40 @@ class VNFReservationAlarmMonitor(VNFAlarmMonitor):
         alarm_dict['status'] = params['data'].get('current')
         driver = 'ceilometer'
         return self.process_alarm(driver, vnf, alarm_dict)
+
+
+class VNFMaintenanceAlarmMonitor(VNFAlarmMonitor):
+    """VNF Maintenance Alarm monitor"""
+
+    def update_vnf_with_maintenance(self, vnf, vdu_names):
+        maintenance = dict()
+        vdus = dict()
+        params = dict()
+        params['vnf_id'] = vnf['id']
+        params['mon_policy_name'] = 'maintenance'
+        params['mon_policy_action'] = vnf['tenant_id']
+        driver = 'ceilometer'
+        url = self.call_alarm_url(driver, vnf, params)
+        maintenance['url'] = url[:url.rindex('/')]
+        vdu_names.append('ALL')
+        for vdu in vdu_names:
+            access_key = ''.join(
+                random.SystemRandom().choice(
+                    string.ascii_lowercase + string.digits)
+                for _ in range(8))
+            vdus[vdu] = access_key
+        maintenance.update({'vdus': vdus})
+        details = "Alarm URL set successfully: %s" % maintenance['url']
+        vnfm_utils.log_events(t_context.get_admin_context(), vnf,
+                              constants.RES_EVT_MONITOR, details)
+        return maintenance
+
+    def process_alarm_for_vnf(self, vnf, trigger):
+        """call in plugin"""
+        params = trigger['params']
+        alarm_dict = dict()
+        alarm_dict['alarm_id'] = params['data'].get('alarm_id')
+        alarm_dict['status'] = params['data'].get('current')
+        driver = 'ceilometer'
+        return self.process_alarm(driver, vnf, alarm_dict)
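Note: for a VNF whose VDU1 carries the maintenance property, update_vnf_with_maintenance() returns roughly the shape below; the URL and keys are illustrative (each access key is a fresh 8-character lowercase/digit string, and an 'ALL' entry is always added for VNF-wide maintenance):

    maintenance = {
        # call_alarm_url() result with its last path segment stripped;
        # the exact form of the alarm URL is deployment-specific.
        'url': 'http://tacker-host:9890/v1.0/vnfs/<vnf-id>/maintenance',
        'vdus': {
            'VDU1': 'x1y2z3a4',  # illustrative access key
            'ALL': 'b5c6d7e8',   # VNF-wide maintenance entry
        },
    }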

@@ -21,6 +21,7 @@ import yaml
 import eventlet
 from oslo_config import cfg
 from oslo_log import log as logging
+from oslo_serialization import jsonutils
 from oslo_utils import excutils
 from oslo_utils import uuidutils
 from toscaparser.tosca_template import ToscaTemplate
@@ -34,6 +35,7 @@ from tacker import context as t_context
 from tacker.db.vnfm import vnfm_db
 from tacker.extensions import vnfm
 from tacker.plugins.common import constants
+from tacker.plugins import fenix
 from tacker.tosca import utils as toscautils
 from tacker.vnfm.mgmt_drivers import constants as mgmt_constants
 from tacker.vnfm import monitor
@@ -147,7 +149,9 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
         self._vnf_monitor = monitor.VNFMonitor(self.boot_wait)
         self._vnf_alarm_monitor = monitor.VNFAlarmMonitor()
         self._vnf_reservation_monitor = monitor.VNFReservationAlarmMonitor()
+        self._vnf_maintenance_monitor = monitor.VNFMaintenanceAlarmMonitor()
         self._vnf_app_monitor = monitor.VNFAppMonitor()
+        self._vnf_maintenance_plugin = fenix.FenixPlugin()
         self._init_monitoring()

     def _init_monitoring(self):
@@ -258,6 +262,13 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
         vnfd_dict = yaml.safe_load(vnfd_yaml)
         if not (vnfd_dict and vnfd_dict.get('tosca_definitions_version')):
             return
+        try:
+            toscautils.updateimports(vnfd_dict)
+            tosca_vnfd = ToscaTemplate(a_file=False,
+                                       yaml_dict_tpl=vnfd_dict)
+        except Exception as e:
+            LOG.exception("tosca-parser error: %s", str(e))
+            raise vnfm.ToscaParserFailed(error_msg_details=str(e))
         polices = vnfd_dict['topology_template'].get('policies', [])
         for policy_dict in polices:
             name, policy = list(policy_dict.items())[0]
@@ -273,6 +284,13 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
                     self, context, vnf_dict, policy)
                 vnf_dict['attributes']['reservation_policy'] = vnf_dict['id']
                 vnf_dict['attributes'].update(alarm_url)
+        maintenance_vdus = toscautils.find_maintenance_vdus(tosca_vnfd)
+        maintenance = \
+            self._vnf_maintenance_monitor.update_vnf_with_maintenance(
+                vnf_dict, maintenance_vdus)
+        vnf_dict['attributes'].update({
+            'maintenance': jsonutils.dumps(maintenance['vdus'])})
+        vnf_dict['attributes']['maintenance_url'] = maintenance['url']

     def add_vnf_to_appmonitor(self, context, vnf_dict):
         appmonitor = self._vnf_app_monitor.create_app_dict(context, vnf_dict)
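Note: after add_alarm_url_to_vnf() runs, the VNF record carries the per-VDU access keys as a JSON string plus the shared callback URL, roughly:

    # Illustrative values only; the real ones are generated per VNF.
    vnf_dict = {'attributes': {}}
    vnf_dict['attributes']['maintenance'] = (
        '{"VDU1": "x1y2z3a4", "ALL": "b5c6d7e8"}')
    vnf_dict['attributes']['maintenance_url'] = (
        'http://tacker-host:9890/v1.0/vnfs/<vnf-id>/maintenance')
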
@@ -281,6 +299,8 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
     def config_vnf(self, context, vnf_dict):
         config = vnf_dict['attributes'].get('config')
         if not config:
+            self._vnf_maintenance_plugin.create_vnf_constraints(self, context,
+                                                                vnf_dict)
             return
         if isinstance(config, str):
             # TODO(dkushwaha) remove this load once db supports storing
@@ -367,6 +387,7 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
         if driver_name == 'openstack':
             self.mgmt_create_pre(context, vnf_dict)
             self.add_alarm_url_to_vnf(context, vnf_dict)
+        vnf_dict['attributes']['maintenance_group'] = uuidutils.generate_uuid()

         try:
             instance_id = self._vnf_manager.invoke(
@@ -431,9 +452,9 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
         if 'app_monitoring_policy' in vnf_dict['attributes']:
             self.add_vnf_to_appmonitor(context, vnf_dict)

         if vnf_dict['status'] is not constants.ERROR:
             self.add_vnf_to_monitor(context, vnf_dict)

         self.config_vnf(context, vnf_dict)
         self.spawn_n(create_vnf_wait)
         return vnf_dict
@@ -466,15 +487,18 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
                 with excutils.save_and_reraise_exception():
                     new_status = constants.ERROR
                     self._vnf_monitor.delete_hosting_vnf(vnf_dict['id'])
+                    self._vnf_maintenance_plugin.post(context, vnf_dict)
                     self.set_vnf_error_status_reason(context, vnf_dict['id'],
                                                      six.text_type(e))
         except exceptions.MgmtDriverException as e:
             LOG.error('VNF configuration failed')
             new_status = constants.ERROR
             self._vnf_monitor.delete_hosting_vnf(vnf_dict['id'])
+            self._vnf_maintenance_plugin.post(context, vnf_dict)
             self.set_vnf_error_status_reason(context, vnf_dict['id'],
                                              six.text_type(e))
+        del vnf_dict['heal_stack_id']
         vnf_dict['status'] = new_status
         self.mgmt_update_post(context, vnf_dict)

@@ -482,6 +506,9 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
             evt_details = ("Ends the heal vnf request for VNF '%s'" %
                            vnf_dict['id'])
             self._vnf_monitor.update_hosting_vnf(vnf_dict, evt_details)
+        self._vnf_maintenance_plugin.update_vnf_instances(self, context,
+                                                          vnf_dict)
+        self._vnf_maintenance_plugin.post(context, vnf_dict)

         # _update_vnf_post() method updates vnf_status and mgmt_ip_address
         self._update_vnf_post(context, vnf_dict['id'],
                               new_status, vnf_dict,
@@ -521,6 +548,8 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
         self._update_vnf_post(context, vnf_dict['id'], new_status,
                               vnf_dict, constants.PENDING_UPDATE,
                               constants.RES_EVT_UPDATE)
+        self._vnf_maintenance_plugin.create_vnf_constraints(self, context,
+                                                            vnf_dict)

     def update_vnf(self, context, vnf_id, vnf):
         vnf_attributes = vnf['vnf']['attributes']
@@ -591,6 +620,9 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
         self._vnf_monitor.update_hosting_vnf(vnf_dict, evt_details)

         try:
+            vnf_dict['heal_stack_id'] = heal_request_data_obj.stack_id
+            self._vnf_maintenance_plugin.project_instance_pre(context,
+                                                              vnf_dict)
             self.mgmt_update_pre(context, vnf_dict)
             self._vnf_manager.invoke(
                 driver_name, 'heal_vdu', plugin=self,
@@ -604,6 +636,7 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
                                              vnf_dict['id'],
                                              six.text_type(e))
             self.mgmt_update_post(context, vnf_dict)
+            self._vnf_maintenance_plugin.post(context, vnf_dict)
             self._update_vnf_post(context, vnf_id,
                                   constants.ERROR,
                                   vnf_dict, constants.PENDING_HEAL,
@@ -637,6 +670,8 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
                 self.set_vnf_error_status_reason(context, vnf_dict['id'],
                                                  vnf_dict['error_reason'])

+        self._vnf_maintenance_plugin.delete_vnf_constraints(self, context,
+                                                            vnf_dict)
         self.mgmt_delete_post(context, vnf_dict)
         self._delete_vnf_post(context, vnf_dict, e)

@@ -654,6 +689,8 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
             mgmt_constants.KEY_KWARGS: {'vnf': vnf_dict},
         }
         try:
+            self._vnf_maintenance_plugin.project_instance_pre(context,
+                                                              vnf_dict)
             self.mgmt_delete_pre(context, vnf_dict)
             self.mgmt_call(context, vnf_dict, kwargs)
             if instance_id:
@@ -737,6 +774,8 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
             LOG.debug("Policy %(policy)s vnf is at %(status)s",
                       {'policy': policy['name'],
                        'status': status})
+            self._vnf_maintenance_plugin.project_instance_pre(context,
+                                                              result)
             return result

         # post
@@ -750,6 +789,10 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
             LOG.debug("Policy %(policy)s vnf is at %(status)s",
                       {'policy': policy['name'],
                        'status': new_status})
+            action = 'delete' if policy['action'] == 'in' else 'update'
+            self._vnf_maintenance_plugin.update_vnf_instances(self, context,
+                                                              result,
+                                                              action=action)
             return result

         # action
@@ -1040,3 +1083,20 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
         else:
             raise vnfm.VNFInactive(vnf_id=vnf_id,
                                    message=_(' Cannot fetch details'))
+
+    def create_vnf_maintenance(self, context, vnf_id, maintenance):
+        _maintenance = self._vnf_maintenance_plugin.validate_maintenance(
+            maintenance.copy())
+        vnf = self.get_vnf(context, vnf_id)
+        _maintenance['vnf'] = vnf
+        self._vnf_maintenance_plugin.handle_maintenance(
+            self, context, _maintenance)
+        policy_action = _maintenance.get('policy_action', '')
+        if policy_action:
+            self._vnf_action.invoke(
+                policy_action['action'], 'execute_action', plugin=self,
+                context=context, vnf_dict=vnf, args=policy_action['args'])
+        else:
+            self._vnf_maintenance_plugin.request(self, context, vnf,
+                                                 _maintenance)
+        return maintenance['maintenance']
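Note: create_vnf_maintenance() is the path taken when Fenix raises a maintenance event: the payload is validated, constraints are handled, and either a policy action (e.g. vdu_autoheal) is executed or a reply is sent back through FenixPlugin.request(). A hypothetical invocation, assuming plugin, context, and vnf_id are in scope; the payload keys are illustrative since the authoritative schema is whatever FenixPlugin.validate_maintenance() accepts:

    # Hypothetical payload shape; real validation lives in
    # tacker.plugins.fenix.FenixPlugin.validate_maintenance().
    maintenance = {'maintenance': {'state': 'MAINTENANCE'}}
    result = plugin.create_vnf_maintenance(context, vnf_id, maintenance)
    # 'result' echoes back maintenance['maintenance'].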

@@ -34,6 +34,9 @@ class VNFActionVduAutoheal(abstract_action.AbstractPolicyAction):
     def execute_action(self, plugin, context, vnf_dict, args):
         vdu_name = args.get('vdu_name')
+        stack_id = args.get('stack_id', vnf_dict['instance_id'])
+        heat_tpl = args.get('heat_tpl', 'heat_template')
+        cause = args.get('cause', [])
         if vdu_name is None:
             LOG.error("VDU resource of vnf '%s' is not present for "
                       "autoheal." % vnf_dict['id'])
@@ -46,7 +49,7 @@ class VNFActionVduAutoheal(abstract_action.AbstractPolicyAction):
             """
             resource_list = [vdu_name]
             heat_template = yaml.safe_load(vnf_dict['attributes'].get(
-                'heat_template'))
+                heat_tpl))
             vdu_resources = heat_template['resources'].get(vdu_name)
             cp_resources = vdu_resources['properties'].get('networks')
             for resource in cp_resources:
@@ -54,17 +57,18 @@ class VNFActionVduAutoheal(abstract_action.AbstractPolicyAction):
             return resource_list

+        if not cause or type(cause) is not list:
+            cause = ["Unable to reach while monitoring resource: '%s'",
+                     "Failed to monitor VDU resource '%s'"]
         resource_list = _get_vdu_resources()
         additional_params = []
         for resource in resource_list:
-            additional_paramas_obj = objects.HealVnfAdditionalParams(
-                parameter=resource,
-                cause=["Unable to reach while monitoring resource: '%s'" %
-                       resource])
-            additional_params.append(additional_paramas_obj)
+            additional_params_obj = objects.HealVnfAdditionalParams(
+                parameter=resource, cause=[cause[0] % resource])
+            additional_params.append(additional_params_obj)
         heal_request_data_obj = objects.HealVnfRequest(
-            cause=("Failed to monitor VDU resource '%s'" % vdu_name),
-            additional_params=additional_params)
+            stack_id=stack_id,
+            cause=(cause[-1] % vdu_name), additional_params=additional_params)
         plugin.heal_vnf(context, vnf_dict['id'], heal_request_data_obj)
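Note: the autoheal action now accepts optional stack_id, heat_tpl, and cause entries in args, so maintenance-driven healing can target a nested stack and a VDU-specific template while plain monitoring keeps the old defaults. An illustrative args dict (keys are from the diff; the values and the surrounding plugin/context/vnf_dict objects are assumptions):

    # Omitted keys fall back to the defaults set in execute_action().
    args = {
        'vdu_name': 'VDU1',
        'stack_id': 'nested-stack-uuid',      # default: vnf_dict['instance_id']
        'heat_tpl': 'maintenance_hot_VDU1',   # default: 'heat_template'
        'cause': ["Migrating VDU resource: '%s'",
                  "Migration of VDU resource '%s'"],
    }
    VNFActionVduAutoheal().execute_action(plugin, context, vnf_dict, args)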