Add fenix plugin for host maintenance

This feature adds a Fenix plugin, a create_vnf_maintenance() call in the
VNFM, and a VNFMaintenanceAlarmMonitor that creates the alarms Fenix needs.
It also modifies the alarm_receiver and the CRUD operations in the VNFM.
With this feature, every VNF gets an ALL_MAINTENANCE resource to interact
with the Fenix plugin, plus a [VDU_NAME]_MAINTENANCE resource if a VDU has
the maintenance property. [VDU_NAME]_MAINTENANCE will be used to perform
VNF software modification. Currently, the plugin supports CRUD of
maintenance constraints, scale in/out, and migration for MIGRATE and
LIVE_MIGRATE. The feature includes functions for OWN_ACTION with modified
healing, but they do not yet work with the default VNF workflow in Fenix.
The feature also does not support server_group or HA-related operations
such as switch-over, because Tacker does not support them; these will be
enhanced once the prerequisites are added.

Co-Authored-By: Hyunsik Yang <yangun@dcn.ssu.ac.kr>
Implements: blueprint vnf-rolling-upgrade
Change-Id: I34b82fd40830dd74d0f5ef24a60b3ff465cd4819
28 changed files with 1321 additions and 35 deletions
@@ -0,0 +1,183 @@
..
      Copyright 2020 Distributed Cloud and Network (DCN)

      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.

================================
VNF zero impact host maintenance
================================

Tacker allows you to maintain hosts with zero impact on VNFs. The
maintenance workflow is performed by the ``Fenix`` service, which creates a
maintenance session that can scale VNFs, migrate them and patch hosts.


References
~~~~~~~~~~

- `Fenix <https://fenix.readthedocs.io/en/latest/>`_.
- `Fenix Configuration Guide <https://fenix.readthedocs.io/en/latest/configuration/dependencies.html>`_.

Installation and configurations
-------------------------------

1. You need the Fenix, Ceilometer and Aodh OpenStack services.

2. Modify the configuration files below.

   /etc/ceilometer/event_pipeline.yaml:

   .. code-block:: yaml

       sinks:
           - name: event_sink
             publishers:
                 - panko://
                 - notifier://
                 - notifier://?topic=alarm.all

   /etc/ceilometer/event_definitions.yaml:

   .. code-block:: yaml

       - event_type: 'maintenance.scheduled'
         traits:
           service:
             fields: payload.service
           allowed_actions:
             fields: payload.allowed_actions
           instance_ids:
             fields: payload.instance_ids
           reply_url:
             fields: payload.reply_url
           state:
             fields: payload.state
           session_id:
             fields: payload.session_id
           actions_at:
             fields: payload.actions_at
             type: datetime
           project_id:
             fields: payload.project_id
           reply_at:
             fields: payload.reply_at
             type: datetime
           metadata:
             fields: payload.metadata
       - event_type: 'maintenance.host'
         traits:
           host:
             fields: payload.host
           project_id:
             fields: payload.project_id
           session_id:
             fields: payload.session_id
           state:
             fields: payload.state
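3. Restart the Ceilometer notification agent so the new event definitions
   take effect. A minimal sketch, assuming a DevStack deployment (the unit
   name differs per installer):

   .. code-block:: console

       $ sudo systemctl restart devstack@ceilometer-anotification.service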

Deploying maintenance tosca template with tacker
------------------------------------------------

When template is normal
~~~~~~~~~~~~~~~~~~~~~~~

If the ``Fenix`` service is enabled and the maintenance event_types are
defined, then every VNF created by the legacy VNFM gets an
``ALL_MAINTENANCE`` resource in its Stack.

.. code-block:: yaml

    resources:
      ALL_maintenance:
        properties:
          alarm_actions:
          - http://openstack-master:9890/v1.0/vnfs/e8b9bec5-541b-492c-954e-cd4af71eda1f/maintenance/0cc65f4bba9c42bfadf4aebec6ae7348/hbyhgkav
          event_type: maintenance.scheduled
        type: OS::Aodh::EventAlarm

When template has maintenance property
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If a VDU in the VNFD has the maintenance property, then the VNFM also
creates a ``[VDU_NAME]_MAINTENANCE`` alarm resource, which will later be
used for VNF software modification. This does not work yet; it will be
updated in a future release.

``Sample tosca-template``:

.. code-block:: yaml

    tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

    description: VNF TOSCA template with maintenance

    metadata:
      template_name: sample-tosca-vnfd-maintenance

    topology_template:
      node_templates:
        VDU1:
          type: tosca.nodes.nfv.VDU.Tacker
          properties:
            maintenance: True
            image: cirros-0.4.0-x86_64-disk
          capabilities:
            nfv_compute:
              properties:
                disk_size: 1 GB
                mem_size: 512 MB
                num_cpus: 2

        CP1:
          type: tosca.nodes.nfv.CP.Tacker
          properties:
            management: true
            order: 0
            anti_spoofing_protection: false
          requirements:
            - virtualLink:
                node: VL1
            - virtualBinding:
                node: VDU1

        VL1:
          type: tosca.nodes.nfv.VL
          properties:
            network_name: net_mgmt
            vendor: Tacker

      policies:
        - SP1:
            type: tosca.policies.tacker.Scaling
            properties:
              increment: 1
              cooldown: 120
              min_instances: 1
              max_instances: 3
              default_instances: 2
              targets: [VDU1]

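To deploy the template, create a VNFD from it and then a VNF. A minimal
sketch with the legacy tacker CLI; the names ``maintenance-vnfd`` and
``maintenance-vnf`` are placeholders:

.. code-block:: console

    $ tacker vnfd-create --vnfd-file sample-tosca-vnfd-maintenance.yaml maintenance-vnfd
    $ tacker vnf-create --vnfd-name maintenance-vnfd maintenance-vnf

Once the VNF becomes ACTIVE, the maintenance alarm resources described
above appear in its Heat stack.
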
Configure maintenance constraints with config yaml
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When ``Fenix`` performs maintenance, it requires some constraints to
achieve zero impact. Each VNF can set and update these constraints with a
config file like the one below.

.. code-block:: yaml

    maintenance:
      max_impacted_members: 1
      recovery_time: 60
      mitigation_type: True
      lead_time: 120
      migration_type: 'MIGRATE'
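
The constraints can be supplied when the VNF is created or changed later
with a VNF update. A minimal sketch with the legacy tacker CLI, assuming
the yaml above is saved as ``maintenance-config.yaml``:

.. code-block:: console

    $ tacker vnf-update --config-file maintenance-config.yaml maintenance-vnf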
@@ -0,0 +1,34 @@
- event_type: 'maintenance.scheduled'
  traits:
    service:
      fields: payload.service
    allowed_actions:
      fields: payload.allowed_actions
    instance_ids:
      fields: payload.instance_ids
    reply_url:
      fields: payload.reply_url
    state:
      fields: payload.state
    session_id:
      fields: payload.session_id
    actions_at:
      fields: payload.actions_at
      type: datetime
    project_id:
      fields: payload.project_id
    reply_at:
      fields: payload.reply_at
      type: datetime
    metadata:
      fields: payload.metadata
- event_type: 'maintenance.host'
  traits:
    host:
      fields: payload.host
    project_id:
      fields: payload.project_id
    session_id:
      fields: payload.session_id
    state:
      fields: payload.state
@@ -0,0 +1,456 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import requests
import time
import yaml

from oslo_config import cfg
from oslo_serialization import jsonutils

from tacker._i18n import _
from tacker.common import clients
from tacker.common import log
from tacker.extensions import vnfm
from tacker.plugins.common import constants
from tacker.vnfm import vim_client


CONF = cfg.CONF
OPTS = [
    cfg.IntOpt('lead_time', default=120,
               help=_('Lead time (in seconds) for the migration_type '
                      'operation')),
    cfg.IntOpt('max_interruption_time', default=120,
               help=_('Time (in seconds) a live migration may take')),
    cfg.IntOpt('recovery_time', default=2,
               help=_('Time (in seconds) for a migrated node to reach a '
                      'fully running state')),
    cfg.IntOpt('request_retries',
               default=5,
               help=_("Number of attempts to retry a request")),
    cfg.IntOpt('request_retry_wait',
               default=5,
               help=_("Wait time (in seconds) between consecutive requests"))
]
CONF.register_opts(OPTS, 'fenix')
MAINTENANCE_KEYS = (
    'instance_ids', 'session_id', 'state', 'reply_url'
)
MAINTENANCE_SUB_KEYS = {
    'PREPARE_MAINTENANCE': [('allowed_actions', 'list'),
                            ('instance_ids', 'list')],
    'PLANNED_MAINTENANCE': [('allowed_actions', 'list'),
                            ('instance_ids', 'list')]
}


def config_opts():
    return [('fenix', OPTS)]


class FenixPlugin(object):
    def __init__(self):
        self.REQUEST_RETRIES = cfg.CONF.fenix.request_retries
        self.REQUEST_RETRY_WAIT = cfg.CONF.fenix.request_retry_wait
        self.endpoint = None
        self._instances = {}
        self.vim_client = vim_client.VimClient()

    @log.log
    def request(self, plugin, context, vnf_dict, maintenance={},
                data_func=None):
        params_list = [maintenance]
        method = 'put'
        is_reply = True
        if data_func:
            action, create_func = data_func.split('_', 1)
            create_func = '_create_%s_list' % create_func
            if action in ['update', 'delete'] and hasattr(self, create_func):
                params_list = getattr(self, create_func)(
                    context, vnf_dict, action)
                method = action if action == 'delete' else 'put'
                is_reply = False
        for params in params_list:
            self._request(plugin, context, vnf_dict, params, method, is_reply)
        return len(params_list)

    @log.log
    def create_vnf_constraints(self, plugin, context, vnf_dict):
        self.update_vnf_constraints(plugin, context, vnf_dict,
                                    objects=['instance_group',
                                             'project_instance'])

    @log.log
    def delete_vnf_constraints(self, plugin, context, vnf_dict):
        self.update_vnf_constraints(plugin, context, vnf_dict,
                                    action='delete',
                                    objects=['instance_group',
                                             'project_instance'])

    @log.log
    def update_vnf_instances(self, plugin, context, vnf_dict,
                             action='update'):
        requests = self.update_vnf_constraints(plugin, context,
                                               vnf_dict, action,
                                               objects=['project_instance'])
        if requests[0]:
            self.post(context, vnf_dict)

    @log.log
    def update_vnf_constraints(self, plugin, context, vnf_dict,
                               action='update', objects=[]):
        result = []
        for obj in objects:
            requests = self.request(plugin, context, vnf_dict,
                                    data_func='%s_%s' % (action, obj))
            result.append(requests)
        return result

    @log.log
    def post(self, context, vnf_dict, **kwargs):
        post_function = getattr(context, 'maintenance_post_function', None)
        if not post_function:
            return
        post_function(context, vnf_dict)
        del context.maintenance_post_function

    @log.log
    def project_instance_pre(self, context, vnf_dict):
        key = vnf_dict['id']
        if key not in self._instances:
            self._instances.update({
                key: self._get_instances(context, vnf_dict)})

    @log.log
    def validate_maintenance(self, maintenance):
        body = maintenance['maintenance']['params']['data']['body']
        if not set(MAINTENANCE_KEYS).issubset(body) or \
                body['state'] not in constants.RES_EVT_MAINTENANCE:
            raise vnfm.InvalidMaintenanceParameter()
        sub_keys = MAINTENANCE_SUB_KEYS.get(body['state'], ())
        for key, val_type in sub_keys:
            if key not in body or type(body[key]) is not eval(val_type):
                raise vnfm.InvalidMaintenanceParameter()
        return body

    @log.log
    def _request(self, plugin, context, vnf_dict, maintenance,
                 method='put', is_reply=True):
        client = self._get_openstack_clients(context, vnf_dict)
        if not self.endpoint:
            self.endpoint = client.keystone_session.get_endpoint(
                service_type='maintenance', region_name=client.region_name)
            if not self.endpoint:
                raise vnfm.ServiceTypeNotFound(service_type_id='maintenance')

        if 'reply_url' in maintenance:
            url = maintenance['reply_url']
        elif 'url' in maintenance:
            url = "%s/%s" % (self.endpoint.rstrip('/'),
                             maintenance['url'].strip('/'))
        else:
            return

        def create_headers():
            return {
                'X-Auth-Token': client.keystone_session.get_token(),
                'Content-Type': 'application/json',
                'Accept': 'application/json'
            }

        request_body = {}
        request_body['headers'] = create_headers()
        state = constants.ACK if vnf_dict['status'] == constants.ACTIVE \
            else constants.NACK
        if method == 'put':
            data = maintenance.get('data', {})
            if is_reply:
                data['session_id'] = maintenance.get('session_id', '')
                data['state'] = "%s_%s" % (state, maintenance['state'])
            request_body['data'] = jsonutils.dump_as_bytes(data)

        def request_wait():
            retries = self.REQUEST_RETRIES
            while retries > 0:
                response = getattr(requests, method)(url, **request_body)
                if response.status_code == 200:
                    break
                else:
                    retries -= 1
                    time.sleep(self.REQUEST_RETRY_WAIT)

        plugin.spawn_n(request_wait)
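
    # handle_maintenance() below maps each Fenix session state delivered
    # through the alarm receiver to a _create_<state>() handler
    # (maintenance, scale_in, prepare_maintenance, planned_maintenance,
    # maintenance_complete); states without a handler are ignored here.
    # Handlers fill maintenance['data'] and may register a
    # maintenance_post_function on the context so the reply to Fenix is
    # only sent after the requested policy action has run.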

    @log.log
    def handle_maintenance(self, plugin, context, maintenance):
        action = '_create_%s' % maintenance['state'].lower()
        maintenance['data'] = {}
        if hasattr(self, action):
            getattr(self, action)(plugin, context, maintenance)

    @log.log
    def _create_maintenance(self, plugin, context, maintenance):
        vnf_dict = maintenance.get('vnf', {})
        vnf_dict['attributes'].update({'maintenance_scaled': 0})
        plugin._update_vnf_post(context, vnf_dict['id'], constants.ACTIVE,
                                vnf_dict, constants.ACTIVE,
                                constants.RES_EVT_UPDATE)
        instances = self._get_instances(context, vnf_dict)
        instance_ids = [x['id'] for x in instances]
        maintenance['data'].update({'instance_ids': instance_ids})

    @log.log
    def _create_scale_in(self, plugin, context, maintenance):
        def post_function(context, vnf_dict):
            scaled = int(vnf_dict['attributes'].get('maintenance_scaled', 0))
            vnf_dict['attributes']['maintenance_scaled'] = str(scaled + 1)
            plugin._update_vnf_post(context, vnf_dict['id'], constants.ACTIVE,
                                    vnf_dict, constants.ACTIVE,
                                    constants.RES_EVT_UPDATE)
            instances = self._get_instances(context, vnf_dict)
            instance_ids = [x['id'] for x in instances]
            maintenance['data'].update({'instance_ids': instance_ids})
            self.request(plugin, context, vnf_dict, maintenance)

        vnf_dict = maintenance.get('vnf', {})
        policy_action = self._create_scale_dict(plugin, context, vnf_dict)
        if policy_action:
            maintenance.update({'policy_action': policy_action})
        context.maintenance_post_function = post_function

    @log.log
    def _create_prepare_maintenance(self, plugin, context, maintenance):
        self._create_planned_maintenance(plugin, context, maintenance)

    @log.log
    def _create_planned_maintenance(self, plugin, context, maintenance):
        def post_function(context, vnf_dict):
            migration_type = self._get_constraints(vnf_dict,
                                                   key='migration_type',
                                                   default='MIGRATE')
            maintenance['data'].update({'instance_action': migration_type})
            self.request(plugin, context, vnf_dict, maintenance)

        vnf_dict = maintenance.get('vnf', {})
        instances = self._get_instances(context, vnf_dict)
        request_instance_id = maintenance['instance_ids'][0]
        selected = None
        for instance in instances:
            if instance['id'] == request_instance_id:
                selected = instance
                break
        if not selected:
            raise vnfm.InvalidMaintenanceParameter()

        migration_type = self._get_constraints(vnf_dict, key='migration_type',
                                               default='MIGRATE')
        if migration_type == 'OWN_ACTION':
            policy_action = self._create_migrate_dict(context, vnf_dict,
                                                      selected)
            maintenance.update({'policy_action': policy_action})
            context.maintenance_post_function = post_function
        else:
            post_function(context, vnf_dict)

    @log.log
    def _create_maintenance_complete(self, plugin, context, maintenance):
        def post_function(context, vnf_dict):
            vim_res = self.vim_client.get_vim(context, vnf_dict['vim_id'])
            scaled = int(vnf_dict['attributes'].get('maintenance_scaled', 0))
            if vim_res['vim_type'] == 'openstack':
                scaled -= 1
                vnf_dict['attributes']['maintenance_scaled'] = str(scaled)
                plugin._update_vnf_post(context, vnf_dict['id'],
                                        constants.ACTIVE, vnf_dict,
                                        constants.ACTIVE,
                                        constants.RES_EVT_UPDATE)
            if scaled > 0:
                scale_out(plugin, context, vnf_dict)
            else:
                instances = self._get_instances(context, vnf_dict)
                instance_ids = [x['id'] for x in instances]
                maintenance['data'].update({'instance_ids': instance_ids})
                self.request(plugin, context, vnf_dict, maintenance)

        def scale_out(plugin, context, vnf_dict):
            policy_action = self._create_scale_dict(plugin, context, vnf_dict,
                                                    scale_type='out')
            context.maintenance_post_function = post_function
            plugin._vnf_action.invoke(policy_action['action'],
                                      'execute_action', plugin=plugin,
                                      context=context, vnf_dict=vnf_dict,
                                      args=policy_action['args'])

        vnf_dict = maintenance.get('vnf', {})
        scaled = vnf_dict.get('attributes', {}).get('maintenance_scaled', 0)
        if int(scaled):
            policy_action = self._create_scale_dict(plugin, context, vnf_dict,
                                                    scale_type='out')
            maintenance.update({'policy_action': policy_action})
            context.maintenance_post_function = post_function

    @log.log
    def _create_scale_dict(self, plugin, context, vnf_dict, scale_type='in'):
        policy_action, scale_dict = {}, {}
        policies = self._get_scaling_policies(plugin, context, vnf_dict)
        if not policies:
            return
        scale_dict['type'] = scale_type
        scale_dict['policy'] = policies[0]['name']
        policy_action['action'] = 'autoscaling'
        policy_action['args'] = {'scale': scale_dict}
        return policy_action

    @log.log
    def _create_migrate_dict(self, context, vnf_dict, instance):
        policy_action, heal_dict = {}, {}
        heal_dict['vdu_name'] = instance['name']
        heal_dict['cause'] = ["Migrate resource '%s' to other host."]
        heal_dict['stack_id'] = instance['stack_name']
        if 'scaling_group_names' in vnf_dict['attributes']:
            sg_names = vnf_dict['attributes']['scaling_group_names']
            sg_names = list(jsonutils.loads(sg_names).keys())
            heal_dict['heat_tpl'] = '%s_res.yaml' % sg_names[0]
        policy_action['action'] = 'vdu_autoheal'
        policy_action['args'] = heal_dict
        return policy_action

    @log.log
    def _create_instance_group_list(self, context, vnf_dict, action):
        group_id = vnf_dict['attributes'].get('maintenance_group', '')
        if not group_id:
            return

        def get_constraints(data):
            maintenance_config = self._get_constraints(vnf_dict)
            data['max_impacted_members'] = maintenance_config.get(
                'max_impacted_members', 1)
            data['recovery_time'] = maintenance_config.get('recovery_time',
                                                           60)

        params, data = {}, {}
        params['url'] = '/instance_group/%s' % group_id
        if action == 'update':
            data['group_id'] = group_id
            data['project_id'] = vnf_dict['tenant_id']
            data['group_name'] = 'tacker_nonha_app_group_%s' % vnf_dict['id']
            data['anti_affinity_group'] = False
            data['max_instances_per_host'] = 0
            data['resource_mitigation'] = True
            get_constraints(data)
            params.update({'data': data})
        return [params]

    @log.log
    def _create_project_instance_list(self, context, vnf_dict, action):
        group_id = vnf_dict.get('attributes', {}).get('maintenance_group', '')
        if not group_id:
            return

        params_list = []
        url = '/instance'
        instances = self._get_instances(context, vnf_dict)
        _instances = self._instances.get(vnf_dict['id'], {})
        if _instances:
            if action == 'update':
                instances = [v for v in instances if v not in _instances]
                del self._instances[vnf_dict['id']]
            else:
                instances = [v for v in _instances if v not in instances]
                if len(instances) != len(_instances):
                    del self._instances[vnf_dict['id']]

        if action == 'update':
            maintenance_configs = self._get_constraints(vnf_dict)
            for instance in instances:
                params, data = {}, {}
                params['url'] = '%s/%s' % (url, instance['id'])
                data['project_id'] = instance['project_id']
                data['instance_id'] = instance['id']
                data['instance_name'] = instance['name']
                data['migration_type'] = maintenance_configs.get(
                    'migration_type', 'MIGRATE')
                data['resource_mitigation'] = maintenance_configs.get(
                    'mitigation_type', True)
                data['max_interruption_time'] = maintenance_configs.get(
                    'max_interruption_time',
                    cfg.CONF.fenix.max_interruption_time)
                data['lead_time'] = maintenance_configs.get(
                    'lead_time', cfg.CONF.fenix.lead_time)
                data['group_id'] = group_id
                params.update({'data': data})
                params_list.append(params)
        elif action == 'delete':
            for instance in instances:
                params = {}
                params['url'] = '%s/%s' % (url, instance['id'])
                params_list.append(params)
        return params_list

    @log.log
    def _get_instances(self, context, vnf_dict):
        vim_res = self.vim_client.get_vim(context, vnf_dict['vim_id'])
        action = '_get_instances_with_%s' % vim_res['vim_type']
        if hasattr(self, action):
            return getattr(self, action)(context, vnf_dict)
        return {}

    @log.log
    def _get_instances_with_openstack(self, context, vnf_dict):
        def get_attrs_with_link(links):
            attrs = {}
            for link in links:
                href, rel = link['href'], link['rel']
                if rel == 'self':
                    words = href.split('/')
                    attrs['project_id'] = words[5]
                    attrs['stack_name'] = words[7]
                    break
            return attrs

        instances = []
        client = self._get_openstack_clients(context, vnf_dict)
        resources = client.heat.resources.list(vnf_dict['instance_id'],
                                               nested_depth=2)
        for resource in resources:
            if resource.resource_type == 'OS::Nova::Server' and \
                    resource.resource_status != 'DELETE_IN_PROGRESS':
                instance = {
                    'id': resource.physical_resource_id,
                    'name': resource.resource_name
                }
                instance.update(get_attrs_with_link(resource.links))
                instances.append(instance)
        return instances

    @log.log
    def _get_scaling_policies(self, plugin, context, vnf_dict):
        vnf_id = vnf_dict['id']
        policies = []
        if 'scaling_group_names' in vnf_dict['attributes']:
            policies = plugin.get_vnf_policies(
                context, vnf_id, filters={'type': constants.POLICY_SCALING})
        return policies

    @log.log
    def _get_constraints(self, vnf, key=None, default=None):
        config = vnf.get('attributes', {}).get('config', '{}')
        maintenance_config = yaml.safe_load(config).get('maintenance', {})
        if key:
            return maintenance_config.get(key, default)
        return maintenance_config

    @log.log
    def _get_openstack_clients(self, context, vnf_dict):
        vim_res = self.vim_client.get_vim(context, vnf_dict['vim_id'])
        region_name = vnf_dict.setdefault('placement_attr', {}).get(
            'region_name', None)
        client = clients.OpenstackClients(auth_attr=vim_res['vim_auth'],
                                          region_name=region_name)
        return client
@@ -0,0 +1,51 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: Maintenance VNF with Fenix

metadata:
  template_name: tosca-vnfd-maintenance

topology_template:
  node_templates:
    VDU1:
      capabilities:
        nfv_compute:
          properties:
            disk_size: 15 GB
            mem_size: 2048 MB
            num_cpus: 2
      properties:
        availability_zone: nova
        image: cirros-0.4.0-x86_64-disk
        maintenance: true
        mgmt_driver: noop
      type: tosca.nodes.nfv.VDU.Tacker

    CP11:
      properties:
        anti_spoofing_protection: false
        management: true
        order: 0
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
      type: tosca.nodes.nfv.CP.Tacker

    VL1:
      properties:
        network_name: net_mgmt
        vendor: Tacker
      type: tosca.nodes.nfv.VL
  policies:
    - SP1:
        properties:
          cooldown: 120
          default_instances: 3
          increment: 1
          max_instances: 3
          min_instances: 1
          targets:
            - VDU1
        type: tosca.policies.tacker.Scaling
@@ -0,0 +1,194 @@
# Copyright 2020 Distributed Cloud and Network (DCN)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from datetime import datetime
import time
import yaml

from oslo_serialization import jsonutils
from oslo_utils import uuidutils

from tacker.plugins.common import constants as evt_constants
from tacker.tests import constants
from tacker.tests.functional import base
from tacker.tests.utils import read_file


class VnfTestMaintenanceMonitor(base.BaseTackerTest):

    def _test_vnf_tosca_maintenance(self, vnfd_file, vnf_name):
        input_yaml = read_file(vnfd_file)
        tosca_dict = yaml.safe_load(input_yaml)
        tosca_arg = {'vnfd': {'name': vnf_name,
                              'attributes': {'vnfd': tosca_dict}}}

        # Create vnfd with tosca template
        vnfd_instance = self.client.create_vnfd(body=tosca_arg)
        self.assertIsNotNone(vnfd_instance)

        # Create vnf with vnfd_id
        vnfd_id = vnfd_instance['vnfd']['id']
        vnf_arg = {'vnf': {'vnfd_id': vnfd_id, 'name': vnf_name}}
        vnf_instance = self.client.create_vnf(body=vnf_arg)
        vnf_id = vnf_instance['vnf']['id']

        self.validate_vnf_instance(vnfd_instance, vnf_instance)

        def _wait_vnf_active_and_assert_vdu_count(vdu_count, scale_type=None):
            self.wait_until_vnf_active(
                vnf_id,
                constants.VNF_CIRROS_CREATE_TIMEOUT,
                constants.ACTIVE_SLEEP_TIME)

            vnf = self.client.show_vnf(vnf_id)['vnf']
            self.assertEqual(vdu_count, len(jsonutils.loads(
                vnf['mgmt_ip_address'])['VDU1']))

        def _verify_maintenance_attributes(vnf_dict):
            vnf_attrs = vnf_dict.get('attributes', {})
            maintenance_vdus = vnf_attrs.get('maintenance', '{}')
            maintenance_vdus = jsonutils.loads(maintenance_vdus)
            maintenance_url = vnf_attrs.get('maintenance_url', '')
            words = maintenance_url.split('/')

            self.assertEqual(len(maintenance_vdus.keys()), 2)
            self.assertEqual(len(words), 8)
            self.assertEqual(words[5], vnf_dict['id'])
            self.assertEqual(words[7], vnf_dict['tenant_id'])

            maintenance_urls = {}
            for vdu, access_key in maintenance_vdus.items():
                maintenance_urls[vdu] = maintenance_url + '/' + access_key
            return maintenance_urls

        def _verify_maintenance_alarm(url, project_id):
            aodh_client = self.aodh_http_client()
            alarm_query = {
                'and': [
                    {'=': {'project_id': project_id}},
                    {'=~': {'alarm_actions': url}}]}

            # Check alarm instance for MAINTENANCE_ALL
            alarm_url = 'v2/query/alarms'
            encoded_data = jsonutils.dumps(alarm_query)
            encoded_body = jsonutils.dumps({'filter': encoded_data})
            resp, response_body = aodh_client.do_request(alarm_url, 'POST',
                                                         body=encoded_body)
            self.assertEqual(len(response_body), 1)
            alarm_dict = response_body[0]
            self.assertEqual(url, alarm_dict.get('alarm_actions', [])[0])
            return response_body[0]

        def _verify_maintenance_actions(vnf_dict, alarm_dict):
            tacker_client = self.tacker_http_client()
            alarm_url = alarm_dict.get('alarm_actions', [])[0]
            tacker_url = '/%s' % alarm_url[alarm_url.find('v1.0'):]

            def _request_maintenance_action(state):
                alarm_body = _create_alarm_data(vnf_dict, alarm_dict, state)
                resp, response_body = tacker_client.do_request(
                    tacker_url, 'POST', body=alarm_body)

                time.sleep(constants.SCALE_SLEEP_TIME)
                target_scaled = -1
                if state == 'SCALE_IN':
                    target_scaled = 1
                    _wait_vnf_active_and_assert_vdu_count(2, scale_type='in')
                elif state == 'MAINTENANCE_COMPLETE':
                    target_scaled = 0
                    _wait_vnf_active_and_assert_vdu_count(3, scale_type='out')

                updated_vnf = self.client.show_vnf(vnf_id)['vnf']
                scaled = updated_vnf['attributes'].get('maintenance_scaled',
                                                       '-1')
                self.assertEqual(int(scaled), target_scaled)
                time.sleep(constants.SCALE_WINDOW_SLEEP_TIME)

            time.sleep(constants.SCALE_WINDOW_SLEEP_TIME)
            _request_maintenance_action('SCALE_IN')
            _request_maintenance_action('MAINTENANCE_COMPLETE')

            self.verify_vnf_crud_events(
                vnf_id, evt_constants.RES_EVT_SCALE,
                evt_constants.ACTIVE, cnt=2)
            self.verify_vnf_crud_events(
                vnf_id, evt_constants.RES_EVT_SCALE,
                evt_constants.PENDING_SCALE_OUT, cnt=1)
            self.verify_vnf_crud_events(
                vnf_id, evt_constants.RES_EVT_SCALE,
                evt_constants.PENDING_SCALE_IN, cnt=1)

        def _create_alarm_data(vnf_dict, alarm_dict, state):
            '''Create a raw alarm payload to trigger Tacker directly.

            This builds the raw payload that Fenix would send while
            processing its maintenance procedures. The alarm_receiver and
            the individual steps of the Fenix workflow are tested by
            sending this raw payload straight to Tacker.
            '''
            utc_time = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')
            fake_url = 'http://localhost/'
            sample_data = {
                'alarm_name': alarm_dict['name'],
                'alarm_id': alarm_dict['alarm_id'],
                'severity': 'low',
                'previous': 'alarm',
                'current': 'alarm',
                'reason': 'Alarm test for Tacker functional test',
                'reason_data': {
                    'type': 'event',
                    'event': {
                        'message_id': uuidutils.generate_uuid(),
                        'event_type': 'maintenance.scheduled',
                        'generated': utc_time,
                        'traits': [
                            ['project_id', 1, vnf_dict['tenant_id']],
                            ['allowed_actions', 1, '[]'],
                            ['instance_ids', 1, fake_url],
                            ['reply_url', 1, fake_url],
                            ['state', 1, state],
                            ['session_id', 1, uuidutils.generate_uuid()],
                            ['actions_at', 4, utc_time],
                            ['reply_at', 4, utc_time],
                            ['metadata', 1, '{}']
                        ],
                        'raw': {},
                        'message_signature': uuidutils.generate_uuid()
                    }
                }
            }
            return jsonutils.dumps(sample_data)

        _wait_vnf_active_and_assert_vdu_count(3)
        urls = _verify_maintenance_attributes(vnf_instance['vnf'])

        maintenance_url = urls.get('ALL', '')
        project_id = vnf_instance['vnf']['tenant_id']
        alarm_dict = _verify_maintenance_alarm(maintenance_url, project_id)
        _verify_maintenance_actions(vnf_instance['vnf'], alarm_dict)

        try:
            self.client.delete_vnf(vnf_id)
        except Exception:
            assert False, (
                'Failed to delete vnf %s after the maintenance test' % vnf_id)
        self.addCleanup(self.client.delete_vnfd, vnfd_id)
        self.addCleanup(self.wait_until_vnf_delete, vnf_id,
                        constants.VNF_CIRROS_DELETE_TIMEOUT)

    def test_vnf_alarm_maintenance(self):
        self._test_vnf_tosca_maintenance(
            'sample-tosca-vnfd-maintenance.yaml',
            'maintenance_vnf')