
Implement Fenix plugin in Tacker

Add a Fenix plugin for host maintenance.
This feature adds the Fenix plugin itself, create_vnf_maintenance() in the
VNFM, and VNFMaintenanceAlarmMonitor to create alarms for Fenix. It also
modifies the alarm_receiver and the CRUD operations in the VNFM.

With this feature, every VNF gets an ALL_MAINTENANCE resource to interact
with the Fenix plugin, plus a [VDU_NAME]_MAINTENANCE resource for each VDU
that has the maintenance property. [VDU_NAME]_MAINTENANCE will be used to
perform VNF software modification.

Currently, the plugin can perform CRUD operations on maintenance
constraints, scale in/out, and migration for MIGRATE and LIVE_MIGRATE. The
feature includes functions for OWN_ACTION with modified healing, but these
do not work with the default VNF workflow in Fenix. The feature also does
not support server_group or HA-related operations such as switch-over,
because Tacker does not support them. These features will be enhanced once
the required support is added.

Co-Authored-By: Hyunsik Yang <yangun@dcn.ssu.ac.kr>

Implements: blueprint vnf-rolling-upgrade
Change-Id: I34b82fd40830dd74d0f5ef24a60b3ff465cd4819
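The routing decision described above (maintenance alarms vs. trigger alarms in the alarm receiver) can be sketched as follows. This is a minimal, illustrative reimplementation, not code from this change; the URL layout and segment index are assumptions based on the alarm URL shown in the maintenance usage guide.

```python
# Hypothetical sketch of the alarm_receiver routing added by this change:
# the policy segment of the alarm callback URL decides whether the request
# body is wrapped as a 'trigger' or a 'maintenance' resource.

def classify_alarm_url(path):
    """Return (resource, redirect collection) for an alarm callback path.

    Assumed layout (from the docs in this change):
    /v1.0/vnfs/<vnf_id>/<policy>/<tenant_or_action>/<access_key>
    """
    parts = path.lstrip('/').split('/')
    policy = parts[3]  # corresponds to info[4] in alarm_receiver.py
    resource = 'maintenance' if policy == 'maintenance' else 'trigger'
    return resource, resource + 's'
```

For a maintenance alarm the request is redirected to the `maintenances` sub-resource; any other policy keeps the pre-existing `triggers` path.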
Jangwon Lee 2 years ago committed by JangwonLee
commit df0ba6b7e0
  1. .zuul.yaml (2)
  2. devstack/lib/tacker (9)
  3. devstack/local.conf.example (4)
  4. devstack/plugin.sh (5)
  5. doc/source/reference/index.rst (1)
  6. doc/source/reference/maintenance_usage_guide.rst (183)
  7. etc/ceilometer/maintenance_event_types.yaml (34)
  8. setup.cfg (2)
  9. tacker/alarm_receiver.py (35)
  10. tacker/extensions/vnfm.py (39)
  11. tacker/objects/heal_vnf_request.py (4)
  12. tacker/plugins/common/constants.py (6)
  13. tacker/plugins/fenix.py (456)
  14. tacker/tests/etc/samples/sample-tosca-vnfd-maintenance.yaml (51)
  15. tacker/tests/functional/base.py (7)
  16. tacker/tests/functional/vnfm/test_tosca_vnf_maintenance.py (194)
  17. tacker/tests/unit/vnfm/infra_drivers/openstack/test_vdu.py (1)
  18. tacker/tests/unit/vnfm/test_k8s_plugin.py (22)
  19. tacker/tests/unit/vnfm/test_monitor.py (38)
  20. tacker/tests/unit/vnfm/test_plugin.py (30)
  21. tacker/tosca/lib/tacker_nfv_defs.yaml (4)
  22. tacker/tosca/utils.py (66)
  23. tacker/vnfm/infra_drivers/openstack/openstack.py (21)
  24. tacker/vnfm/infra_drivers/openstack/translate_template.py (4)
  25. tacker/vnfm/infra_drivers/openstack/vdu.py (17)
  26. tacker/vnfm/monitor.py (39)
  27. tacker/vnfm/plugin.py (62)
  28. tacker/vnfm/policy_actions/vdu_autoheal/vdu_autoheal.py (20)

2
.zuul.yaml

@@ -67,6 +67,7 @@
- openstack/python-tackerclient
- openstack/tacker
- openstack/tacker-horizon
- x/fenix
vars:
devstack_localrc:
CELLSV2_SETUP: singleconductor
@@ -93,6 +94,7 @@
mistral: https://opendev.org/openstack/mistral
tacker: https://opendev.org/openstack/tacker
blazar: https://opendev.org/openstack/blazar
fenix: https://opendev.org/x/fenix
devstack_services:
# Core services enabled for this branch.
# This list replaces the test-matrix.

9
devstack/lib/tacker

@@ -81,6 +81,7 @@ TACKER_NOVA_CA_CERTIFICATES_FILE=${TACKER_NOVA_CA_CERTIFICATES_FILE:-}
TACKER_NOVA_API_INSECURE=${TACKER_NOVA_API_INSECURE:-False}
HEAT_CONF_DIR=/etc/heat
CEILOMETER_CONF_DIR=/etc/ceilometer
source ${TACKER_DIR}/tacker/tests/contrib/post_test_hook_lib.sh
@@ -480,3 +481,11 @@ function modify_heat_flavor_policy_rule {
# Allow non-admin projects with 'admin' roles to create flavors in Heat
echo '"resource_types:OS::Nova::Flavor": "role:admin"' >> $policy_file
}
function configure_maintenance_event_types {
local event_definitions_file=$CEILOMETER_CONF_DIR/event_definitions.yaml
local maintenance_events_file=$TACKER_DIR/etc/ceilometer/maintenance_event_types.yaml
echo "Configure maintenance event types to $event_definitions_file"
cat $maintenance_events_file >> $event_definitions_file
}

4
devstack/local.conf.example

@@ -43,12 +43,16 @@ enable_plugin mistral https://opendev.org/openstack/mistral master
# Ceilometer
#CEILOMETER_PIPELINE_INTERVAL=300
CEILOMETER_EVENT_ALARM=True
enable_plugin ceilometer https://opendev.org/openstack/ceilometer master
enable_plugin aodh https://opendev.org/openstack/aodh master
# Blazar
enable_plugin blazar https://github.com/openstack/blazar.git master
# Fenix
enable_plugin fenix https://opendev.org/x/fenix.git master
# Tacker
enable_plugin tacker https://opendev.org/openstack/tacker master

5
devstack/plugin.sh

@@ -41,6 +41,11 @@ if is_service_enabled tacker; then
tacker_check_and_download_images
echo_summary "Registering default VIM"
tacker_register_default_vim
if is_service_enabled ceilometer; then
echo_summary "Configure maintenance event types"
configure_maintenance_event_types
fi
fi
fi

1
doc/source/reference/index.rst

@@ -24,3 +24,4 @@ Reference
mistral_workflows_usage_guide.rst
block_storage_usage_guide.rst
reservation_policy_usage_guide.rst
maintenance_usage_guide.rst

183
doc/source/reference/maintenance_usage_guide.rst

@@ -0,0 +1,183 @@
..
Copyright 2020 Distributed Cloud and Network (DCN)
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
================================
VNF zero impact host maintenance
================================
Tacker allows you to maintain hosts with zero VNF impact. Maintenance
workflows are performed by the ``Fenix`` service, which creates a session
that can scale VNFs, migrate VNFs, and patch hosts.
References
~~~~~~~~~~
- `Fenix <https://fenix.readthedocs.io/en/latest/>`_.
- `Fenix Configuration Guide <https://fenix.readthedocs.io/en/latest/configuration/dependencies.html>`_.
Installation and configurations
-------------------------------
1. You need Fenix, Ceilometer and Aodh OpenStack services.
2. Modify the below configuration files:
/etc/ceilometer/event_pipeline.yaml
.. code-block:: yaml
sinks:
  - name: event_sink
    publishers:
      - panko://
      - notifier://
      - notifier://?topic=alarm.all
/etc/ceilometer/event_definitions.yaml:
.. code-block:: yaml
- event_type: 'maintenance.scheduled'
  traits:
    service:
      fields: payload.service
    allowed_actions:
      fields: payload.allowed_actions
    instance_ids:
      fields: payload.instance_ids
    reply_url:
      fields: payload.reply_url
    state:
      fields: payload.state
    session_id:
      fields: payload.session_id
    actions_at:
      fields: payload.actions_at
      type: datetime
    project_id:
      fields: payload.project_id
    reply_at:
      fields: payload.reply_at
      type: datetime
    metadata:
      fields: payload.metadata
- event_type: 'maintenance.host'
  traits:
    host:
      fields: payload.host
    project_id:
      fields: payload.project_id
    session_id:
      fields: payload.session_id
    state:
      fields: payload.state
Deploying maintenance tosca template with tacker
------------------------------------------------
When template is normal
~~~~~~~~~~~~~~~~~~~~~~~
If the ``Fenix`` service is enabled and the maintenance event types are
defined, every VNF created by the legacy VNFM will get an ``ALL_MAINTENANCE``
resource in its stack.
.. code-block:: yaml
resources:
  ALL_maintenance:
    properties:
      alarm_actions:
        - http://openstack-master:9890/v1.0/vnfs/e8b9bec5-541b-492c-954e-cd4af71eda1f/maintenance/0cc65f4bba9c42bfadf4aebec6ae7348/hbyhgkav
      event_type: maintenance.scheduled
    type: OS::Aodh::EventAlarm
When template has maintenance property
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a VDU in the VNFD has the maintenance property, the VNFM creates
``[VDU_NAME]_MAINTENANCE`` alarm resources, which will be used for VNF
software modification later. This does not work yet and will be updated.
``Sample tosca-template``:
.. code-block:: yaml
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: VNF TOSCA template with maintenance
metadata:
  template_name: sample-tosca-vnfd-maintenance
topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      properties:
        maintenance: True
        image: cirros-0.4.0-x86_64-disk
      capabilities:
        nfv_compute:
          properties:
            disk_size: 1 GB
            mem_size: 512 MB
            num_cpus: 2
    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net_mgmt
        vendor: Tacker
  policies:
    - SP1:
        type: tosca.policies.tacker.Scaling
        properties:
          increment: 1
          cooldown: 120
          min_instances: 1
          max_instances: 3
          default_instances: 2
          targets: [VDU1]
Configure maintenance constraints with config yaml
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When ``Fenix`` performs maintenance, it requires some constraints to achieve
zero impact. As in the config file below, each VNF can set and update its
constraints.
.. code-block:: yaml
maintenance:
  max_impacted_members: 1
  recovery_time: 60
  mitigation_type: True
  lead_time: 120
  migration_type: 'MIGRATE'
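How these per-VNF constraints combine with the plugin's defaults can be sketched as below. This is an illustrative merge, not Tacker code; the default values are taken from the ``fenix`` config options added by this change (``lead_time`` 120, ``max_interruption_time`` 120) and from the fallbacks the plugin uses (``migration_type`` 'MIGRATE', ``mitigation_type`` True).

```python
# Illustrative sketch: per-VNF maintenance config (from the VNF's config
# attribute, as in the yaml above) overrides plugin-level defaults.

DEFAULTS = {
    'max_interruption_time': 120,  # cfg.CONF.fenix.max_interruption_time
    'lead_time': 120,              # cfg.CONF.fenix.lead_time
    'migration_type': 'MIGRATE',
    'mitigation_type': True,
}

def instance_constraints(maintenance_config):
    """Merge a VNF's maintenance config over the plugin defaults."""
    merged = dict(DEFAULTS)
    merged.update(maintenance_config)
    return merged
```

A VNF that sets only ``migration_type: 'LIVE_MIGRATE'`` would therefore still get the default lead time and mitigation behavior.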

34
etc/ceilometer/maintenance_event_types.yaml

@@ -0,0 +1,34 @@
- event_type: 'maintenance.scheduled'
  traits:
    service:
      fields: payload.service
    allowed_actions:
      fields: payload.allowed_actions
    instance_ids:
      fields: payload.instance_ids
    reply_url:
      fields: payload.reply_url
    state:
      fields: payload.state
    session_id:
      fields: payload.session_id
    actions_at:
      fields: payload.actions_at
      type: datetime
    project_id:
      fields: payload.project_id
    reply_at:
      fields: payload.reply_at
      type: datetime
    metadata:
      fields: payload.metadata
- event_type: 'maintenance.host'
  traits:
    host:
      fields: payload.host
    project_id:
      fields: payload.project_id
    session_id:
      fields: payload.session_id
    state:
      fields: payload.state

2
setup.cfg

@@ -93,6 +93,8 @@ oslo.config.opts =
tacker.vnfm.monitor_drivers.ceilometer.ceilometer = tacker.vnfm.monitor_drivers.ceilometer.ceilometer:config_opts
tacker.vnfm.monitor_drivers.zabbix.zabbix = tacker.vnfm.monitor_drivers.zabbix.zabbix:config_opts
tacker.alarm_receiver = tacker.alarm_receiver:config_opts
tacker.plugins.fenix = tacker.plugins.fenix:config_opts
mistral.actions =
tacker.vim_ping_action = tacker.nfvo.workflows.vim_monitor.vim_ping_action:PingVimAction

35
tacker/alarm_receiver.py

@@ -52,6 +52,8 @@ class AlarmReceiver(wsgi.Middleware):
if not self.handle_url(url):
return
prefix, info, params = self.handle_url(req.url)
resource = 'trigger' if info[4] != 'maintenance' else 'maintenance'
redirect = resource + 's'
auth = cfg.CONF.keystone_authtoken
alarm_auth = cfg.CONF.alarm_auth
token = Token(username=alarm_auth.username,
@@ -66,19 +68,24 @@
# Change the body request
if req.body:
body_dict = dict()
body_dict['trigger'] = {}
body_dict['trigger'].setdefault('params', {})
body_dict[resource] = {}
body_dict[resource].setdefault('params', {})
# Update params in the body request
body_info = jsonutils.loads(req.body)
body_dict['trigger']['params']['data'] = body_info
body_dict['trigger']['params']['credential'] = info[6]
# Update policy and action
body_dict['trigger']['policy_name'] = info[4]
body_dict['trigger']['action_name'] = info[5]
body_dict[resource]['params']['credential'] = info[6]
if resource == 'maintenance':
body_info.update({
'body': self._handle_maintenance_body(body_info)})
del body_info['reason_data']
else:
# Update policy and action
body_dict[resource]['policy_name'] = info[4]
body_dict[resource]['action_name'] = info[5]
body_dict[resource]['params']['data'] = body_info
req.body = jsonutils.dump_as_bytes(body_dict)
LOG.debug('Body alarm: %s', req.body)
# Need to change url because of mandatory
req.environ['PATH_INFO'] = prefix + 'triggers'
req.environ['PATH_INFO'] = prefix + redirect
req.environ['QUERY_STRING'] = ''
LOG.debug('alarm url in receiver: %s', req.url)
@@ -98,3 +105,15 @@ class AlarmReceiver(wsgi.Middleware):
prefix_url = '/%(collec)s/%(vnf_uuid)s/' % {'collec': p[2],
'vnf_uuid': p[3]}
return prefix_url, p, params
def _handle_maintenance_body(self, body_info):
body = {}
traits_list = body_info['reason_data']['event']['traits']
if type(traits_list) is not list:
return
for key, t_type, val in traits_list:
if t_type == 1 and val and (val[0] == '[' or val[0] == '{'):
body[key] = eval(val)
else:
body[key] = val
return body
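The trait handling in `_handle_maintenance_body` above can be exercised standalone as follows. This is a hedged sketch, not the middleware itself: it assumes Ceilometer serializes event traits as `(name, type, value)` triples with string traits marked by type 1, and it deliberately substitutes `ast.literal_eval` where the original uses `eval`.

```python
import ast

# Standalone sketch of the trait decoding in _handle_maintenance_body.
# String traits (t_type == 1) whose value looks like a list/dict repr are
# parsed back into Python objects; ast.literal_eval is used here as a
# safer stand-in for the eval() call in the original.

def decode_traits(traits):
    body = {}
    for key, t_type, val in traits:
        if t_type == 1 and val and val[0] in '[{':
            body[key] = ast.literal_eval(val)
        else:
            body[key] = val
    return body
```

Scalar traits pass through unchanged; only bracketed string values are re-parsed.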

39
tacker/extensions/vnfm.py

@@ -208,6 +208,10 @@ class InvalidInstReqInfoForScaling(exceptions.InvalidInput):
"fixed ip_address or mac_address.")
class InvalidMaintenanceParameter(exceptions.InvalidInput):
message = _("Could not find the required params for maintenance")
def _validate_service_type_list(data, valid_values=None):
if not isinstance(data, list):
msg = _("Invalid data format for service list: '%s'") % data
@@ -491,6 +495,37 @@ SUB_RESOURCE_ATTRIBUTE_MAP = {
}
}
}
},
'maintenances': {
'parent': {
'collection_name': 'vnfs',
'member_name': 'vnf'
},
'members': {
'maintenance': {
'parameters': {
'params': {
'allow_post': True,
'allow_put': False,
'is_visible': True,
'validate': {'type:dict_or_none': None}
},
'tenant_id': {
'allow_post': True,
'allow_put': False,
'validate': {'type:string': None},
'required_by_policy': False,
'is_visible': False
},
'response': {
'allow_post': False,
'allow_put': False,
'validate': {'type:dict_or_none': None},
'is_visible': True
}
}
}
}
}
}
@@ -623,3 +658,7 @@ class VNFMPluginBase(service_base.NFVPluginBase):
def create_vnf_trigger(
self, context, vnf_id, trigger):
pass
@abc.abstractmethod
def create_vnf_maintenance(self, context, vnf_id, maintenance):
pass

4
tacker/objects/heal_vnf_request.py

@@ -36,11 +36,13 @@ class HealVnfRequest(base.TackerObject):
# Version 1.0: Initial version
# Version 1.1: Added vnf_instance_id
VERSION = '1.1'
# Version 1.2: Added stack_id for nested heat-template
VERSION = '1.2'
fields = {
'vnfc_instance_id': fields.ListOfStringsField(nullable=True,
default=[]),
'stack_id': fields.StringField(nullable=True, default=''),
'cause': fields.StringField(nullable=True, default=None),
'additional_params': fields.ListOfObjectsField(
'HealVnfAdditionalParams', default=[])

6
tacker/plugins/common/constants.py

@@ -30,6 +30,7 @@ COMMON_PREFIXES = {
# Service operation status constants
ACTIVE = "ACTIVE"
ACK = "ACK"
PENDING_CREATE = "PENDING_CREATE"
PENDING_UPDATE = "PENDING_UPDATE"
@@ -40,6 +41,7 @@ PENDING_HEAL = "PENDING_HEAL"
DEAD = "DEAD"
ERROR = "ERROR"
NACK = "NACK"
ACTIVE_PENDING_STATUSES = (
ACTIVE,
@@ -72,6 +74,10 @@ RES_EVT_SCALE = "SCALE"
RES_EVT_NA_STATE = "Not Applicable"
RES_EVT_ONBOARDED = "OnBoarded"
RES_EVT_HEAL = "HEAL"
RES_EVT_MAINTENANCE = [
"MAINTENANCE", "SCALE_IN", "MAINTENANCE_COMPLETE",
"PREPARE_MAINTENANCE", "PLANNED_MAINTENANCE", "INSTANCE_ACTION_DONE"
]
VNF_STATUS_TO_EVT_TYPES = {PENDING_CREATE: RES_EVT_CREATE,
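The ACK/NACK constants added in this hunk feed the reply state that the Fenix plugin sends back (see `_request` in fenix.py, which builds `"%s_%s" % (state, maintenance['state'])`). A minimal sketch of that composition, with illustrative names:

```python
# Sketch (not Tacker code) of how the reply state string is composed:
# the VNF status maps to ACK or NACK, prefixed to the maintenance state
# that Fenix sent (e.g. MAINTENANCE, SCALE_IN, PREPARE_MAINTENANCE).

ACTIVE = "ACTIVE"

def reply_state(vnf_status, maintenance_state):
    prefix = "ACK" if vnf_status == ACTIVE else "NACK"
    return "%s_%s" % (prefix, maintenance_state)
```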

456
tacker/plugins/fenix.py

@@ -0,0 +1,456 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import requests
import time
import yaml
from oslo_config import cfg
from oslo_serialization import jsonutils
from tacker.common import clients
from tacker.common import log
from tacker.extensions import vnfm
from tacker.plugins.common import constants
from tacker.vnfm import vim_client
CONF = cfg.CONF
OPTS = [
cfg.IntOpt('lead_time', default=120,
help=_('Time for migration_type operation')),
cfg.IntOpt('max_interruption_time', default=120,
help=_('Time for how long live migration can take')),
cfg.IntOpt('recovery_time', default=2,
help=_('Time for the migrated node to reach fully running state')),
cfg.IntOpt('request_retries',
default=5,
help=_("Number of attempts to retry the request")),
cfg.IntOpt('request_retry_wait',
default=5,
help=_("Wait time (in seconds) between consecutive requests"))
]
CONF.register_opts(OPTS, 'fenix')
MAINTENANCE_KEYS = (
'instance_ids', 'session_id', 'state', 'reply_url'
)
MAINTENANCE_SUB_KEYS = {
'PREPARE_MAINTENANCE': [('allowed_actions', 'list'),
('instance_ids', 'list')],
'PLANNED_MAINTENANCE': [('allowed_actions', 'list'),
('instance_ids', 'list')]
}
def config_opts():
return [('fenix', OPTS)]
class FenixPlugin(object):
def __init__(self):
self.REQUEST_RETRIES = cfg.CONF.fenix.request_retries
self.REQUEST_RETRY_WAIT = cfg.CONF.fenix.request_retry_wait
self.endpoint = None
self._instances = {}
self.vim_client = vim_client.VimClient()
@log.log
def request(self, plugin, context, vnf_dict, maintenance={},
data_func=None):
params_list = [maintenance]
method = 'put'
is_reply = True
if data_func:
action, create_func = data_func.split('_', 1)
create_func = '_create_%s_list' % create_func
if action in ['update', 'delete'] and hasattr(self, create_func):
params_list = getattr(self, create_func)(
context, vnf_dict, action)
method = action if action == 'delete' else 'put'
is_reply = False
for params in params_list:
self._request(plugin, context, vnf_dict, params, method, is_reply)
return len(params_list)
@log.log
def create_vnf_constraints(self, plugin, context, vnf_dict):
self.update_vnf_constraints(plugin, context, vnf_dict,
objects=['instance_group',
'project_instance'])
@log.log
def delete_vnf_constraints(self, plugin, context, vnf_dict):
self.update_vnf_constraints(plugin, context, vnf_dict,
action='delete',
objects=['instance_group',
'project_instance'])
@log.log
def update_vnf_instances(self, plugin, context, vnf_dict,
action='update'):
requests = self.update_vnf_constraints(plugin, context,
vnf_dict, action,
objects=['project_instance'])
if requests[0]:
self.post(context, vnf_dict)
@log.log
def update_vnf_constraints(self, plugin, context, vnf_dict,
action='update', objects=[]):
result = []
for obj in objects:
requests = self.request(plugin, context, vnf_dict,
data_func='%s_%s' % (action, obj))
result.append(requests)
return result
@log.log
def post(self, context, vnf_dict, **kwargs):
post_function = getattr(context, 'maintenance_post_function', None)
if not post_function:
return
post_function(context, vnf_dict)
del context.maintenance_post_function
@log.log
def project_instance_pre(self, context, vnf_dict):
key = vnf_dict['id']
if key not in self._instances:
self._instances.update({
key: self._get_instances(context, vnf_dict)})
@log.log
def validate_maintenance(self, maintenance):
body = maintenance['maintenance']['params']['data']['body']
if not set(MAINTENANCE_KEYS).issubset(body) or \
body['state'] not in constants.RES_EVT_MAINTENANCE:
raise vnfm.InvalidMaintenanceParameter()
sub_keys = MAINTENANCE_SUB_KEYS.get(body['state'], ())
for key, val_type in sub_keys:
if key not in body or type(body[key]) is not eval(val_type):
raise vnfm.InvalidMaintenanceParameter()
return body
@log.log
def _request(self, plugin, context, vnf_dict, maintenance,
method='put', is_reply=True):
client = self._get_openstack_clients(context, vnf_dict)
if not self.endpoint:
self.endpoint = client.keystone_session.get_endpoint(
service_type='maintenance', region_name=client.region_name)
if not self.endpoint:
raise vnfm.ServiceTypeNotFound(service_type_id='maintenance')
if 'reply_url' in maintenance:
url = maintenance['reply_url']
elif 'url' in maintenance:
url = "%s/%s" % (self.endpoint.rstrip('/'),
maintenance['url'].strip('/'))
else:
return
def create_headers():
return {
'X-Auth-Token': client.keystone_session.get_token(),
'Content-Type': 'application/json',
'Accept': 'application/json'
}
request_body = {}
request_body['headers'] = create_headers()
state = constants.ACK if vnf_dict['status'] == constants.ACTIVE \
else constants.NACK
if method == 'put':
data = maintenance.get('data', {})
if is_reply:
data['session_id'] = maintenance.get('session_id', '')
data['state'] = "%s_%s" % (state, maintenance['state'])
request_body['data'] = jsonutils.dump_as_bytes(data)
def request_wait():
retries = self.REQUEST_RETRIES
while retries > 0:
response = getattr(requests, method)(url, **request_body)
if response.status_code == 200:
break
else:
retries -= 1
time.sleep(self.REQUEST_RETRY_WAIT)
plugin.spawn_n(request_wait)
@log.log
def handle_maintenance(self, plugin, context, maintenance):
action = '_create_%s' % maintenance['state'].lower()
maintenance['data'] = {}
if hasattr(self, action):
getattr(self, action)(plugin, context, maintenance)
@log.log
def _create_maintenance(self, plugin, context, maintenance):
vnf_dict = maintenance.get('vnf', {})
vnf_dict['attributes'].update({'maintenance_scaled': 0})
plugin._update_vnf_post(context, vnf_dict['id'], constants.ACTIVE,
vnf_dict, constants.ACTIVE,
constants.RES_EVT_UPDATE)
instances = self._get_instances(context, vnf_dict)
instance_ids = [x['id'] for x in instances]
maintenance['data'].update({'instance_ids': instance_ids})
@log.log
def _create_scale_in(self, plugin, context, maintenance):
def post_function(context, vnf_dict):
scaled = int(vnf_dict['attributes'].get('maintenance_scaled', 0))
vnf_dict['attributes']['maintenance_scaled'] = str(scaled + 1)
plugin._update_vnf_post(context, vnf_dict['id'], constants.ACTIVE,
vnf_dict, constants.ACTIVE,
constants.RES_EVT_UPDATE)
instances = self._get_instances(context, vnf_dict)
instance_ids = [x['id'] for x in instances]
maintenance['data'].update({'instance_ids': instance_ids})
self.request(plugin, context, vnf_dict, maintenance)
vnf_dict = maintenance.get('vnf', {})
policy_action = self._create_scale_dict(plugin, context, vnf_dict)
if policy_action:
maintenance.update({'policy_action': policy_action})
context.maintenance_post_function = post_function
@log.log
def _create_prepare_maintenance(self, plugin, context, maintenance):
self._create_planned_maintenance(plugin, context, maintenance)
@log.log
def _create_planned_maintenance(self, plugin, context, maintenance):
def post_function(context, vnf_dict):
migration_type = self._get_constraints(vnf_dict,
key='migration_type',
default='MIGRATE')
maintenance['data'].update({'instance_action': migration_type})
self.request(plugin, context, vnf_dict, maintenance)
vnf_dict = maintenance.get('vnf', {})
instances = self._get_instances(context, vnf_dict)
request_instance_id = maintenance['instance_ids'][0]
selected = None
for instance in instances:
if instance['id'] == request_instance_id:
selected = instance
break
if not selected:
raise vnfm.InvalidMaintenanceParameter()
migration_type = self._get_constraints(vnf_dict, key='migration_type',
default='MIGRATE')
if migration_type == 'OWN_ACTION':
policy_action = self._create_migrate_dict(context, vnf_dict,
selected)
maintenance.update({'policy_action': policy_action})
context.maintenance_post_function = post_function
else:
post_function(context, vnf_dict)
@log.log
def _create_maintenance_complete(self, plugin, context, maintenance):
def post_function(context, vnf_dict):
vim_res = self.vim_client.get_vim(context, vnf_dict['vim_id'])
scaled = int(vnf_dict['attributes'].get('maintenance_scaled', 0))
if vim_res['vim_type'] == 'openstack':
scaled -= 1
vnf_dict['attributes']['maintenance_scaled'] = str(scaled)
plugin._update_vnf_post(context, vnf_dict['id'],
constants.ACTIVE, vnf_dict,
constants.ACTIVE,
constants.RES_EVT_UPDATE)
if scaled > 0:
scale_out(plugin, context, vnf_dict)
else:
instances = self._get_instances(context, vnf_dict)
instance_ids = [x['id'] for x in instances]
maintenance['data'].update({'instance_ids': instance_ids})
self.request(plugin, context, vnf_dict, maintenance)
def scale_out(plugin, context, vnf_dict):
policy_action = self._create_scale_dict(plugin, context, vnf_dict,
scale_type='out')
context.maintenance_post_function = post_function
plugin._vnf_action.invoke(policy_action['action'],
'execute_action', plugin=plugin,
context=context, vnf_dict=vnf_dict,
args=policy_action['args'])
vnf_dict = maintenance.get('vnf', {})
scaled = vnf_dict.get('attributes', {}).get('maintenance_scaled', 0)
if int(scaled):
policy_action = self._create_scale_dict(plugin, context, vnf_dict,
scale_type='out')
maintenance.update({'policy_action': policy_action})
context.maintenance_post_function = post_function
@log.log
def _create_scale_dict(self, plugin, context, vnf_dict, scale_type='in'):
policy_action, scale_dict = {}, {}
policies = self._get_scaling_policies(plugin, context, vnf_dict)
if not policies:
return
scale_dict['type'] = scale_type
scale_dict['policy'] = policies[0]['name']
policy_action['action'] = 'autoscaling'
policy_action['args'] = {'scale': scale_dict}
return policy_action
@log.log
def _create_migrate_dict(self, context, vnf_dict, instance):
policy_action, heal_dict = {}, {}
heal_dict['vdu_name'] = instance['name']
heal_dict['cause'] = ["Migrate resource '%s' to other host."]
heal_dict['stack_id'] = instance['stack_name']
if 'scaling_group_names' in vnf_dict['attributes']:
sg_names = vnf_dict['attributes']['scaling_group_names']
sg_names = list(jsonutils.loads(sg_names).keys())
heal_dict['heat_tpl'] = '%s_res.yaml' % sg_names[0]
policy_action['action'] = 'vdu_autoheal'
policy_action['args'] = heal_dict
return policy_action
@log.log
def _create_instance_group_list(self, context, vnf_dict, action):
group_id = vnf_dict['attributes'].get('maintenance_group', '')
if not group_id:
return
def get_constraints(data):
maintenance_config = self._get_constraints(vnf_dict)
data['max_impacted_members'] = maintenance_config.get(
'max_impacted_members', 1)
data['recovery_time'] = maintenance_config.get('recovery_time', 60)
params, data = {}, {}
params['url'] = '/instance_group/%s' % group_id
if action == 'update':
data['group_id'] = group_id
data['project_id'] = vnf_dict['tenant_id']
data['group_name'] = 'tacker_nonha_app_group_%s' % vnf_dict['id']
data['anti_affinity_group'] = False
data['max_instances_per_host'] = 0
data['resource_mitigation'] = True
get_constraints(data)
params.update({'data': data})
return [params]
@log.log
def _create_project_instance_list(self, context, vnf_dict, action):
group_id = vnf_dict.get('attributes', {}).get('maintenance_group', '')
if not group_id:
return
params_list = []
url = '/instance'
instances = self._get_instances(context, vnf_dict)
_instances = self._instances.get(vnf_dict['id'], {})
if _instances:
if action == 'update':
instances = [v for v in instances if v not in _instances]
del self._instances[vnf_dict['id']]
else:
instances = [v for v in _instances if v not in instances]
if len(instances) != len(_instances):
del self._instances[vnf_dict['id']]
if action == 'update':
maintenance_configs = self._get_constraints(vnf_dict)
for instance in instances:
params, data = {}, {}
params['url'] = '%s/%s' % (url, instance['id'])
data['project_id'] = instance['project_id']
data['instance_id'] = instance['id']
data['instance_name'] = instance['name']
data['migration_type'] = maintenance_configs.get(
'migration_type', 'MIGRATE')
data['resource_mitigation'] = maintenance_configs.get(
'mitigation_type', True)
data['max_interruption_time'] = maintenance_configs.get(
'max_interruption_time',
cfg.CONF.fenix.max_interruption_time)
data['lead_time'] = maintenance_configs.get(
'lead_time', cfg.CONF.fenix.lead_time)
data['group_id'] = group_id
params.update({'data': data})
params_list.append(params)
elif action == 'delete':
for instance in instances:
params = {}
params['url'] = '%s/%s' % (url, instance['id'])
params_list.append(params)
return params_list
@log.log
def _get_instances(self, context, vnf_dict):
vim_res = self.vim_client.get_vim(context, vnf_dict['vim_id'])
action = '_get_instances_with_%s' % vim_res['vim_type']
if hasattr(self, action):
return getattr(self, action)(context, vnf_dict)
return {}
@log.log
def _get_instances_with_openstack(self, context, vnf_dict):
def get_attrs_with_link(links):
attrs = {}
for link in links:
href, rel = link['href'], link['rel']
if rel == 'self':
words = href.split('/')
attrs['project_id'] = words[5]
attrs['stack_name'] = words[7]
break
return attrs
instances = []
client = self._get_openstack_clients(context, vnf_dict)
resources = client.heat.resources.list(vnf_dict['instance_id'],
nested_depth=2)
for resource in resources:
if resource.resource_type == 'OS::Nova::Server' and \
resource.resource_status != 'DELETE_IN_PROGRESS':
instance = {
'id': resource.physical_resource_id,
'name': resource.resource_name
}
instance.update(get_attrs_with_link(resource.links))
instances.append(instance)
return instances
@log.log
def _get_scaling_policies(self, plugin, context, vnf_dict):
vnf_id = vnf_dict['id']
policies = []
if 'scaling_group_names' in vnf_dict['attributes']:
policies = plugin.get_vnf_policies(
context, vnf_id, filters={'type': constants.POLICY_SCALING})
return policies
@log.log
def _get_constraints(self, vnf, key=None, default=None):
config = vnf.get('attributes', {}).get('config', '{}')
maintenance_config = yaml.safe_load(config).get('maintenance', {})
if key:
return maintenance_config.get(key, default)
return maintenance_config
@log.log
def _get_openstack_clients(self, context, vnf_dict):
vim_res = self.vim_client.get_vim(context, vnf_dict['vim_id'])
region_name = vnf_dict.setdefault('placement_attr', {}).get(
'region_name', None)
client = clients.OpenstackClients(auth_attr=vim_res['vim_auth'],
region_name=region_name)
return client
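The `request_wait` loop inside `_request` above retries the HTTP call until Fenix answers 200 or the retry budget runs out. A standalone, hedged sketch of that loop (illustrative names; the real code is spawned on a green thread via `plugin.spawn_n`):

```python
import time

# Sketch of the request_wait retry loop in FenixPlugin._request:
# call send() until it reports HTTP 200 or retries are exhausted.
# send, retries, and wait are illustrative parameters.

def retry_request(send, retries=5, wait=0.0):
    """Return True if send() returned 200 within the retry budget."""
    while retries > 0:
        status = send()
        if status == 200:
            return True
        retries -= 1
        time.sleep(wait)  # request_retry_wait in the real config
    return False
```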

51
tacker/tests/etc/samples/sample-tosca-vnfd-maintenance.yaml

@@ -0,0 +1,51 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Maintenance VNF with Fenix
metadata:
  template_name: tosca-vnfd-maintenance
topology_template:
  node_templates:
    VDU1:
      capabilities:
        nfv_compute:
          properties:
            disk_size: 15 GB
            mem_size: 2048 MB
            num_cpus: 2
      properties:
        availability_zone: nova
        image: cirros-0.4.0-x86_64-disk
        maintenance: true
        mgmt_driver: noop
      type: tosca.nodes.nfv.VDU.Tacker
    CP11:
      properties:
        anti_spoofing_protection: false
        management: true
        order: 0
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
      type: tosca.nodes.nfv.CP.Tacker
    VL1:
      properties:
        network_name: net_mgmt
        vendor: Tacker
      type: tosca.nodes.nfv.VL
  policies:
    - SP1:
        properties:
          cooldown: 120
          default_instances: 3
          increment: 1
          max_instances: 3
          min_instances: 1
          targets:
            - VDU1
        type: tosca.policies.tacker.Scaling

7
tacker/tests/functional/base.py

@@ -191,6 +191,13 @@ class BaseTackerTest(base.BaseTestCase):
auth_ses = session.Session(auth=auth, verify=verify)
return glance_client.Client(session=auth_ses)
@classmethod
def aodh_http_client(cls):
auth_session = cls.get_auth_session()
return SessionClient(session=auth_session,
service_type='alarming',
region_name='RegionOne')
def get_vdu_resource(self, stack_id, res_name):
return self.h_client.resources.get(stack_id, res_name)

194
tacker/tests/functional/vnfm/test_tosca_vnf_maintenance.py

@@ -0,0 +1,194 @@
# Copyright 2020 Distributed Cloud and Network (DCN)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from datetime import datetime
import time
import yaml
from oslo_serialization import jsonutils
from oslo_utils import uuidutils
from tacker.plugins.common import constants as evt_constants
from tacker.tests import constants
from tacker.tests.functional import base
from tacker.tests.utils import read_file
class VnfTestMaintenanceMonitor(base.BaseTackerTest):
def _test_vnf_tosca_maintenance(self, vnfd_file, vnf_name):
input_yaml = read_file(vnfd_file)
tosca_dict = yaml.safe_load(input_yaml)
tosca_arg = {'vnfd': {'name': vnf_name,
'attributes': {'vnfd': tosca_dict}}}
# Create vnfd with tosca template
vnfd_instance = self.client.create_vnfd(body=tosca_arg)
self.assertIsNotNone(vnfd_instance)
# Create vnf with vnfd_id
vnfd_id = vnfd_instance['vnfd']['id']
vnf_arg = {'vnf': {'vnfd_id': vnfd_id, 'name': vnf_name}}
vnf_instance = self.client.create_vnf(body=vnf_arg)
vnf_id = vnf_instance['vnf']['id']
self.validate_vnf_instance(vnfd_instance, vnf_instance)
def _wait_vnf_active_and_assert_vdu_count(vdu_count, scale_type=None):
self.wait_until_vnf_active(
vnf_id,
constants.VNF_CIRROS_CREATE_TIMEOUT,
constants.ACTIVE_SLEEP_TIME)
vnf = self.client.show_vnf(vnf_id)['vnf']
self.assertEqual(vdu_count, len(jsonutils.loads(
vnf['mgmt_ip_address'])['VDU1']))
def _verify_maintenance_attributes(vnf_dict):
vnf_attrs = vnf_dict.get('attributes', {})
maintenance_vdus = vnf_attrs.get('maintenance', '{}')
maintenance_vdus = jsonutils.loads(maintenance_vdus)
maintenance_url = vnf_attrs.get('maintenance_url', '')
words = maintenance_url.split('/')
self.assertEqual(len(maintenance_vdus.keys()), 2)
self.assertEqual(len(words), 8)
self.assertEqual(words[5], vnf_dict['id'])
self.assertEqual(words[7], vnf_dict['tenant_id'])
maintenance_urls = {}
for vdu, access_key in maintenance_vdus.items():
maintenance_urls[vdu] = maintenance_url + '/' + access_key
return maintenance_urls
def _verify_maintenance_alarm(url, project_id):
aodh_client = self.aodh_http_client()
alarm_query = {
'and': [
{'=': {'project_id': project_id}},
{'=~': {'alarm_actions': url}}]}
# Check alarm instance for MAINTENANCE_ALL
alarm_url = 'v2/query/alarms'
encoded_data = jsonutils.dumps(alarm_query)
encoded_body = jsonutils.dumps({'filter': encoded_data})
resp, response_body = aodh_client.do_request(alarm_url, 'POST',
body=encoded_body)
self.assertEqual(len(response_body), 1)
alarm_dict = response_body[0]
self.assertEqual(url, alarm_dict.get('alarm_actions', [])[0])
return response_body[0]
def _verify_maintenance_actions(vnf_dict, alarm_dict):
tacker_client = self.tacker_http_client()
alarm_url = alarm_dict.get('alarm_actions', [])[0]
tacker_url = '/%s' % alarm_url[alarm_url.find('v1.0'):]
def _request_maintenance_action(state):
alarm_body = _create_alarm_data(vnf_dict, alarm_dict, state)
resp, response_body = tacker_client.do_request(
tacker_url, 'POST', body=alarm_body)
time.sleep(constants.SCALE_SLEEP_TIME)
target_scaled = -1
if state == 'SCALE_IN':
target_scaled = 1
_wait_vnf_active_and_assert_vdu_count(2, scale_type='in')
elif state == 'MAINTENANCE_COMPLETE':
target_scaled = 0
_wait_vnf_active_and_assert_vdu_count(3, scale_type='out')
updated_vnf = self.client.show_vnf(vnf_id)['vnf']
scaled = updated_vnf['attributes'].get('maintenance_scaled',
'-1')
self.assertEqual(int(scaled), target_scaled)
time.sleep(constants.SCALE_WINDOW_SLEEP_TIME)
time.sleep(constants.SCALE_WINDOW_SLEEP_TIME)
_request_maintenance_action('SCALE_IN')
_request_maintenance_action('MAINTENANCE_COMPLETE')
self.verify_vnf_crud_events(
vnf_id, evt_constants.RES_EVT_SCALE,
evt_constants.ACTIVE, cnt=2)
self.verify_vnf_crud_events(
vnf_id, evt_constants.RES_EVT_SCALE,
evt_constants.PENDING_SCALE_OUT, cnt=1)
self.verify_vnf_crud_events(
vnf_id, evt_constants.RES_EVT_SCALE,
evt_constants.PENDING_SCALE_IN, cnt=1)
def _create_alarm_data(vnf_dict, alarm_dict, state):
'''This function creates a raw payload of alarm to trigger Tacker directly.
This function creates a raw payload which Fenix will put
when Fenix process maintenance procedures. Alarm_receiver and
specific steps of Fenix workflow will be tested by sending the raw
to Tacker directly.
'''
utc_time = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')
fake_url = 'http://localhost/'
sample_data = {
'alarm_name': alarm_dict['name'],
'alarm_id': alarm_dict['alarm_id'],
'severity': 'low',
'previous': 'alarm',
'current': 'alarm',
'reason': 'Alarm test for Tacker functional test',
'reason_data': {
'type': 'event',
'event': {
'message_id': uuidutils.generate_uuid(),
'event_type': 'maintenance.scheduled',
'generated': utc_time,
'traits': [
['project_id', 1, vnf_dict['tenant_id']],
['allowed_actions', 1, '[]'],
['instance_ids', 1, fake_url],
['reply_url', 1, fake_url],
['state', 1, state],
['session_id', 1, uuidutils.generate_uuid()],
['actions_at', 4, utc_time],
['reply_at', 4, utc_time],
['metadata', 1, '{}']
],
'raw': {},
'message_signature': uuidutils.generate_uuid()
}
}
}
return jsonutils.dumps(sample_data)
_wait_vnf_active_and_assert_vdu_count(3)
urls = _verify_maintenance_attributes(vnf_instance['vnf'])
maintenance_url = urls.get('ALL', '')
project_id = vnf_instance['vnf']['tenant_id']
alarm_dict = _verify_maintenance_alarm(maintenance_url, project_id)
_verify_maintenance_actions(vnf_instance['vnf'], alarm_dict)
try:
self.client.delete_vnf(vnf_id)
except Exception:
assert False, (
'Failed to delete vnf %s after the maintenance test' % vnf_id)
self.addCleanup(self.client.delete_vnfd, vnfd_id)
self.addCleanup(self.wait_until_vnf_delete, vnf_id,
constants.VNF_CIRROS_DELETE_TIMEOUT)
def test_vnf_alarm_maintenance(self):
# instance_maintenance = self._get_instance_maintenance()
self._test_vnf_tosca_maintenance(
'sample-tosca-vnfd-maintenance.yaml',
'maintenance_vnf')
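The index checks in `_verify_maintenance_attributes` above (eight `/`-separated parts, the VNF id at `words[5]` and the tenant id at `words[7]`) imply a maintenance URL of the form sketched below; the host and ids here are made-up examples, not values from the commit.

```python
# Hypothetical maintenance_url matching the assertions in
# _verify_maintenance_attributes: splitting on '/' yields 8 parts,
# with the VNF id at index 5 and the tenant id at index 7.
vnf_id = '9770fa22-747d-426e-9819-057a95cb778c'   # example id
tenant_id = 'ad7ebc56538745a08ef7c5e97f8bd437'    # example id
maintenance_url = 'http://tacker-host:9890/v1.0/vnfs/%s/maintenance/%s' % (
    vnf_id, tenant_id)

# ['http:', '', 'tacker-host:9890', 'v1.0', 'vnfs', <vnf_id>,
#  'maintenance', <tenant_id>]
words = maintenance_url.split('/')
assert len(words) == 8
assert words[5] == vnf_id
assert words[7] == tenant_id
```

Appending a per-VDU access key, as the test's `maintenance_urls` loop does, then yields the full per-VDU alarm action URL.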

1
tacker/tests/unit/vnfm/infra_drivers/openstack/test_vdu.py

@@ -107,6 +107,7 @@ class TestVDU(base.TestCase):
            cause=["Unable to reach while monitoring resource: 'VDU1'"])
        self.heal_request_data_obj = heal_vnf_request.HealVnfRequest(
            cause='VNF monitoring fails.',
            stack_id=vnf_dict['instance_id'],
            additional_params=[self.additional_paramas_obj])
        self.heal_vdu = vdu.Vdu(self.context, vnf_dict,
                                self.heal_request_data_obj)

22
tacker/tests/unit/vnfm/test_k8s_plugin.py

@@ -32,6 +32,10 @@ class FakeCVNFMonitor(mock.Mock):
    pass


class FakePlugin(mock.Mock):
    pass


class FakeK8SVimClient(mock.Mock):
    pass

@@ -44,6 +48,8 @@ class TestCVNFMPlugin(db_base.SqlTestCase):
        self._mock_vim_client()
        self._stub_get_vim()
        self._mock_vnf_monitor()
        self._mock_vnf_maintenance_monitor()
        self._mock_vnf_maintenance_plugin()
        self._insert_dummy_vim()
        self.vnfm_plugin = plugin.VNFMPlugin()
        mock.patch('tacker.db.common_services.common_services_db_plugin.'
@@ -108,6 +114,22 @@ class TestCVNFMPlugin(db_base.SqlTestCase):
        self._mock(
            'tacker.vnfm.monitor.VNFMonitor', fake_vnf_monitor)

    def _mock_vnf_maintenance_monitor(self):
        self._vnf_maintenance_mon = mock.Mock(wraps=FakeCVNFMonitor())
        fake_vnf_maintenance_monitor = mock.Mock()
        fake_vnf_maintenance_monitor.return_value = self._vnf_maintenance_mon
        self._mock(
            'tacker.vnfm.monitor.VNFMaintenanceAlarmMonitor',
            fake_vnf_maintenance_monitor)

    def _mock_vnf_maintenance_plugin(self):
        self._vnf_maintenance_plugin = mock.Mock(wraps=FakePlugin())
        fake_vnf_maintenance_plugin = mock.Mock()
        fake_vnf_maintenance_plugin.return_value = self._vnf_maintenance_plugin
        self._mock(
            'tacker.plugins.fenix.FenixPlugin',
            fake_vnf_maintenance_plugin)

    def _insert_dummy_vnf_template(self):
        session = self.context.session
        vnf_template = vnfm_db.VNFD(

38
tacker/tests/unit/vnfm/test_monitor.py

@@ -255,3 +255,41 @@ class TestVNFReservationAlarmMonitor(testtools.TestCase):
        response = test_vnf_reservation_monitor.update_vnf_with_reservation(
            self.plugin, self.context, vnf, policy_dict)
        self.assertEqual(len(response.keys()), 3)


class TestVNFMaintenanceAlarmMonitor(testtools.TestCase):

    def setUp(self):
        super(TestVNFMaintenanceAlarmMonitor, self).setUp()

    def test_process_alarm_for_vnf(self):
        vnf = {'id': MOCK_VNF_ID}
        trigger = {'params': {'data': {
            'alarm_id': MOCK_VNF_ID, 'current': 'alarm'}}}
        test_vnf_maintenance_monitor = monitor.VNFMaintenanceAlarmMonitor()
        response = test_vnf_maintenance_monitor.process_alarm_for_vnf(
            vnf, trigger)
        self.assertEqual(response, True)

    @mock.patch('tacker.db.common_services.common_services_db_plugin.'
                'CommonServicesPluginDb.create_event')
    def test_update_vnf_with_alarm(self, mock_db_service):
        mock_db_service.return_value = {
            'event_type': 'MONITOR',
            'resource_id': '9770fa22-747d-426e-9819-057a95cb778c',
            'timestamp': '2018-10-30 06:01:45.628162',
            'event_details': {'Alarm URL set successfully': {
                'start_actions': 'alarm'}},
            'resource_state': 'CREATE',
            'id': '4583',
            'resource_type': 'vnf'}
        vnf = {
            'id': MOCK_VNF_ID,
            'tenant_id': 'ad7ebc56538745a08ef7c5e97f8bd437',
            'status': 'insufficient_data'}
        vdu_names = ['VDU1']
        test_vnf_maintenance_monitor = monitor.VNFMaintenanceAlarmMonitor()
        response = test_vnf_maintenance_monitor.update_vnf_with_maintenance(
            vnf, vdu_names)
        result_keys = len(response) + len(response.get('vdus', {}))
        self.assertEqual(result_keys, 4)
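The `result_keys` arithmetic in the test above expects `update_vnf_with_maintenance` to return two top-level keys (`url` and `vdus`) plus an `ALL` entry and one access key per maintenance VDU. A sketch of a return value with that shape, with fabricated values modeled on the `FakeVNFMonitor` stub in `test_plugin.py`:

```python
# Fabricated response shape for update_vnf_with_maintenance with a single
# VDU ('VDU1'): two top-level keys plus 'ALL' and 'VDU1' access keys,
# so len(response) + len(response['vdus']) == 4.
response = {
    'url': 'http://local:9890/v1.0/vnfs/example-vnf-id/maintenance/example-tenant',
    'vdus': {
        'ALL': 'ad7ebc56',    # access key for the VNF-wide maintenance alarm
        'VDU1': '538745a0',   # per-VDU access key
    },
}

result_keys = len(response) + len(response.get('vdus', {}))
assert result_keys == 4
```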

30
tacker/tests/unit/vnfm/test_plugin.py

@@ -49,7 +49,12 @@ class FakeDriverManager(mock.Mock):
class FakeVNFMonitor(mock.Mock):
    pass

    def update_vnf_with_maintenance(self, vnf_dict, maintenance_vdus):
        url = 'http://local:9890/v1.0/vnfs/%s/maintenance/%s' % (
            vnf_dict['id'], vnf_dict['tenant_id'])
        return {'url': url,
                'vdus': {'ALL': 'ad7ebc56',
                         'VDU1': '538745a0'