Support feature to attach the existing volume

This patch attaches an already created volume to a VNF using its volume id.
The existing volume id can be added to a tosca.nodes.BlockStorage node; in
that case, no new volume is created during VNF deployment.

Implementation:

* Set the "size" property of tosca.nodes.BlockStorage.Tacker to
  required: false. This overrides the property definition of
  tosca.nodes.Storage.BlockStorage present in tosca-parser. Please refer
  to [1] for tosca.nodes.Storage.BlockStorage details.
* Add a local dictionary to store the volume id present in
  tosca.nodes.BlockStorage.Tacker. This dictionary is used to determine
  the Cinder volume mapping.

Additionally, update block_storage_usage_guide.rst with the changes
required to attach an existing volume.

Add python-cinderclient to test-requirements for the functional test case.

Add a release note for the feature.

[1]: http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.2/csprd01/TOSCA-Simple-Profile-YAML-v1.2-csprd01.html

Change-Id: If5d386a64f98603de843f96287c1e296ae6a2e1f
Implements: blueprint attach-existing-volume
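The key decision this change introduces happens while the VNFD is parsed: a
tosca.nodes.BlockStorage.Tacker node that carries volume_id is recorded in the
local dictionary and excluded from Heat volume creation, while a node that only
carries size still produces a new Cinder volume. Below is a minimal sketch of
that decision with simplified names; it is illustrative only, not the actual
Tacker code, which lives in toscautils.get_volumes() and add_volume_resources()
shown later in this diff.

# Minimal sketch (not the actual Tacker implementation) of the
# volume-mapping decision described in the commit message above.
def map_block_storage_nodes(block_storage_nodes):
    """Split BlockStorage nodes into 'reuse existing' vs 'create new'."""
    volume_dict = {}
    for name, node in block_storage_nodes.items():
        props = node.get('properties', {})
        if 'volume_id' in props:
            # Existing volume: remember the input parameter name so the
            # attachment can use get_param, and emit no new volume resource.
            volume_dict[name] = {'volume_id': props['volume_id']}
        else:
            # New volume: keep the size so a volume resource is generated
            # at deployment time.
            volume_dict[name] = {'size': props.get('size')}
    return volume_dict

# Example:
#   map_block_storage_nodes({
#       'VB1': {'properties': {'volume_id': 'my_vol'}},
#       'VB2': {'properties': {'size': '1 GB'}}})
#   -> {'VB1': {'volume_id': 'my_vol'}, 'VB2': {'size': '1 GB'}}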
parent 971d949da7
commit 4819b827c8
@@ -17,22 +17,22 @@
 Orchestrating VNFs with attached Volumes
 =========================================
 
-To support persistent volumes to VNF, TOSCA NFV profile supports new type
-of nodes. Tacker has now feature of parsing of those new nodes and creation
+To support persistent volumes to VNF, the TOSCA NFV profile supports a new type
+of nodes. Tacker has now the feature of parsing of those new nodes and creation
 of cinder volumes which are attached to the VDUs.
 
 Prerequisites
 ~~~~~~~~~~~~~
 To have persistent volume support to VDUs, we must enable cinder service in
-addition to the other services that needed by Tacker.
+addition to the other services needed by Tacker.
 
 VNFD Changes
 ~~~~~~~~~~~~
 
 There are two steps to have volume attached to VDU:
 
-* Create volume
+* Create volume or Use an existing volume.
 * Attach Volume to VDU
 
 Create Volume
@@ -47,6 +47,25 @@ To add volume, we need to add the below node to the VNFD:
       properties:
         size: 1 GB
 
+Use Existing Volume
+~~~~~~~~~~~~~~~~~~~
+
+We can also attach an already created/existing volume with VNF by providing
+``volume_id`` in input.
+
+.. code-block:: yaml
+
+  topology_template:
+    inputs:
+      my_vol:
+        description: volume id
+        type: string
+
+    VB1:
+      type: tosca.nodes.BlockStorage.Tacker
+      properties:
+        volume_id: my_vol
+
 Attach volume to VDU
 ~~~~~~~~~~~~~~~~~~~~
 Next attach the created volume to VDU as below:
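How the my_vol input receives a concrete Cinder volume UUID is left to the
caller. The functional test added later in this commit simply rewrites the
input's default value before onboarding the VNFD; here is a minimal sketch of
that approach (the file name and UUID are placeholders taken from the sample
template in this commit, and the final create call is shown only as a comment).

# Sketch: give the 'my_vol' input a real Cinder volume UUID before onboarding,
# mirroring vnfd_and_vnf_create() in the functional test base further down.
import yaml

with open('sample-tosca-vnfd-existing-block-storage.yaml') as f:
    tosca_dict = yaml.safe_load(f)

# Point the input at an existing volume so Tacker does not create a new one.
existing_volume_id = '0dbf28ba-d0b7-4369-99ce-7a3c31dc996f'  # placeholder
tosca_dict['topology_template']['inputs']['my_vol']['default'] = existing_volume_id

# The dict is then onboarded as a VNFD, e.g. with python-tackerclient:
#   client.create_vnfd({'vnfd': {'name': 'vnf-with-existing-volume',
#                                'attributes': {'vnfd': tosca_dict}}})
# (body shape taken from the test helper shown later in this diff)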
@@ -113,6 +113,7 @@ python-novaclient==9.1.0
 python-subunit==1.0.0
 python-swiftclient==3.5.0
 python-tackerclient==0.8.0
+python-cinderclient==3.3.0
 pytz==2018.3
 PyYAML==5.1
 repoze.lru==0.7
@@ -0,0 +1,11 @@
---
prelude: >
    This release contains a new feature to attach an already created
    or existing volume with VNF. Enhancement to an existing feature
    that supports attachment of persistent volumes to VNF.

features:
  - |
    This feature allows users to attach an already created or existing
    volume with VNF by providing volume_id in the TOSCA template.
    In this case, no new volume is created while vnf deployment.
@@ -25,6 +25,7 @@ SCALE_SLEEP_TIME = 30
 NS_CREATE_TIMEOUT = 400
 NS_DELETE_TIMEOUT = 300
 NOVA_CLIENT_VERSION = 2
+CINDER_CLIENT_VERSION = 3
 VDU_MARK_UNHEALTHY_TIMEOUT = 500
 VDU_MARK_UNHEALTHY_SLEEP_TIME = 3
 VDU_AUTOHEALING_TIMEOUT = 500
@@ -0,0 +1,63 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: Demo example

metadata:
  template_name: sample-tosca-vnfd

topology_template:
  inputs:
    my_vol:
      default: 0dbf28ba-d0b7-4369-99ce-7a3c31dc996f
      description: volume id
      type: string
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      capabilities:
        nfv_compute:
          properties:
            num_cpus: 1
            mem_size: 512 MB
            disk_size: 1 GB
      properties:
        name: test-vdu-block-storage
        image: cirros-0.4.0-x86_64-disk
        availability_zone: nova
        mgmt_driver: noop
        config: |
          param0: key1
          param1: key2

    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        name: test-cp
        management: true
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1

    VB1:
      type: tosca.nodes.BlockStorage.Tacker
      properties:
        volume_id: my_vol

    CB1:
      type: tosca.nodes.BlockStorageAttachment
      properties:
        location: /dev/vdb
      requirements:
        - virtualBinding:
            node: VDU1
        - virtualAttachment:
            node: VB1
    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net_mgmt
        vendor: Tacker
@@ -16,6 +16,7 @@ import time
 import yaml
 
 from blazarclient import client as blazar_client
+from cinderclient import client as cinder_client
 from glanceclient.v2 import client as glance_client
 from keystoneauth1.identity import v3
 from keystoneauth1 import session
@@ -98,6 +99,7 @@ class BaseTackerTest(base.BaseTestCase):
         cls.http_client = cls.tacker_http_client()
         cls.h_client = cls.heatclient()
         cls.glance_client = cls.glanceclient()
+        cls.cinder_client = cls.cinderclient()
 
     @classmethod
     def get_credentials(cls):
@@ -202,6 +204,20 @@ class BaseTackerTest(base.BaseTestCase):
             service_type='alarming',
             region_name='RegionOne')
 
+    @classmethod
+    def cinderclient(cls):
+        vim_params = cls.get_credentials()
+        auth = v3.Password(auth_url=vim_params['auth_url'],
+            username=vim_params['username'],
+            password=vim_params['password'],
+            project_name=vim_params['project_name'],
+            user_domain_name=vim_params['user_domain_name'],
+            project_domain_name=vim_params['project_domain_name'])
+        verify = 'True' == vim_params.pop('cert_verify', 'False')
+        auth_ses = session.Session(auth=auth, verify=verify)
+        return cinder_client.Client(constants.CINDER_CLIENT_VERSION,
+                                    session=auth_ses)
+
     def get_vdu_resource(self, stack_id, res_name):
         return self.h_client.resources.get(stack_id, res_name)
 
@@ -380,9 +396,29 @@ class BaseTackerTest(base.BaseTestCase):
                 "Key %(key)s expected: %(exp)r, actual %(act)r" %
                 {'key': k, 'exp': v, 'act': actual_superset[k]})
 
-    def vnfd_and_vnf_create(self, vnfd_file, vnf_name):
+    def create_cinder_volume(cls, vol_size, vol_name):
+        try:
+            cinder_volume = cls.cinder_client.volumes.create(vol_size,
+                                                             name=vol_name)
+        except Exception as e:
+            LOG.error("Failed to create cinder volume: %s", str(e))
+            return None
+
+        return cinder_volume.id
+
+    def delete_cinder_volume(cls, vol_id):
+        try:
+            cls.cinder_client.volumes.delete(vol_id)
+        except Exception as e:
+            LOG.error("Failed to delete cinder volume: %s", str(e))
+
+    def vnfd_and_vnf_create(self, vnfd_file, vnf_name, volume_id=None,
+                            volume_name=None):
         input_yaml = read_file(vnfd_file)
         tosca_dict = yaml.safe_load(input_yaml)
+        if volume_id is not None:
+            volume_detail = tosca_dict['topology_template']['inputs']
+            volume_detail[volume_name]['default'] = volume_id
         tosca_arg = {'vnfd': {'name': vnf_name,
                               'attributes': {'vnfd': tosca_dict}}}
@@ -0,0 +1,118 @@
# Copyright 2021 NEC, Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import yaml

from oslo_config import cfg

from tacker.plugins.common import constants as evt_constants
from tacker.tests import constants
from tacker.tests.functional import base


CONF = cfg.CONF
VNF_CIRROS_CREATE_TIMEOUT = 120


class VnfExistingBlockStorageTestToscaCreate(base.BaseTackerTest):

    def _test_create_vnf(self, vnfd_file, vnf_name, volume_id, volume_name,
                         template_source="onboarded"):

        if template_source == "onboarded":
            (vnfd_instance,
             vnf_instance,
             tosca_dict) = self.vnfd_and_vnf_create(vnfd_file,
                vnf_name, volume_id, volume_name)

        vnfd_id = vnf_instance['vnf']['vnfd_id']
        vnf_id = vnf_instance['vnf']['id']
        self.wait_until_vnf_active(
            vnf_id,
            constants.VNF_CIRROS_CREATE_TIMEOUT,
            constants.ACTIVE_SLEEP_TIME)
        vnf_show_out = self.client.show_vnf(vnf_id)['vnf']
        self.assertIsNotNone(vnf_show_out['mgmt_ip_address'])

        prop_dict = tosca_dict['topology_template']['node_templates'][
            'CP1']['properties']

        # Verify if ip_address is static, it is same as in show_vnf
        if prop_dict.get('ip_address'):
            mgmt_ip_address_input = prop_dict.get('ip_address')
            mgmt_info = yaml.safe_load(
                vnf_show_out['mgmt_ip_address'])
            self.assertEqual(mgmt_ip_address_input, mgmt_info['VDU1'])

        # Verify anti spoofing settings
        stack_id = vnf_show_out['instance_id']
        template_dict = tosca_dict['topology_template']['node_templates']
        for field in template_dict:
            prop_dict = template_dict[field]['properties']
            if prop_dict.get('anti_spoofing_protection'):
                self.verify_antispoofing_in_stack(stack_id=stack_id,
                                                  resource_name=field)

        self.verify_vnf_crud_events(
            vnf_id, evt_constants.RES_EVT_CREATE,
            evt_constants.PENDING_CREATE, cnt=2)
        self.verify_vnf_crud_events(
            vnf_id, evt_constants.RES_EVT_CREATE, evt_constants.ACTIVE)
        return vnfd_id, vnf_id

    def _test_delete_vnf(self, vnf_id):
        # Delete vnf_instance with vnf_id
        try:
            self.client.delete_vnf(vnf_id)
        except Exception:
            assert False, "vnf Delete failed"

        self.wait_until_vnf_delete(vnf_id,
                                   constants.VNF_CIRROS_DELETE_TIMEOUT)
        self.verify_vnf_crud_events(vnf_id, evt_constants.RES_EVT_DELETE,
                                    evt_constants.PENDING_DELETE, cnt=2)

    def _test_create_delete_vnf_tosca(self, vnfd_file, vnf_name,
                                      template_source, volume_id, volume_name):
        vnfd_id, vnf_id = self._test_create_vnf(vnfd_file, vnf_name,
            volume_id, volume_name, template_source)
        servers = self.novaclient().servers.list()
        vdus = []
        for server in servers:
            vdus.append(server.name)
        self.assertIn('test-vdu-block-storage', vdus)

        for server in servers:
            if server.name == 'test-vdu-block-storage':
                server_id = server.id
        server_volumes = self.novaclient().volumes\
            .get_server_volumes(server_id)
        self.assertTrue(len(server_volumes) > 0)
        self._test_delete_vnf(vnf_id)

    def _test_create_cinder_volume(self):
        volume_name = 'my_vol'
        size = 1
        volume_id = self.create_cinder_volume(size, volume_name)
        self.assertIsNotNone(volume_id)

        return volume_id, volume_name

    def test_create_delete_vnf_tosca_from_vnfd(self):
        volume_id, volume_name = self._test_create_cinder_volume()
        self._test_create_delete_vnf_tosca(
            'sample-tosca-vnfd-existing-block-storage.yaml',
            'test_tosca_vnf_with_cirros',
            'onboarded', volume_id, volume_name)
        self.delete_cinder_volume(volume_id)
@@ -0,0 +1,63 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: Demo example

metadata:
  template_name: sample-tosca-vnfd

topology_template:
  inputs:
    my_vol:
      default: 0dbf28ba-d0b7-4369-99ce-7a3c31dc996f
      description: volume id
      type: string
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      capabilities:
        nfv_compute:
          properties:
            num_cpus: 1
            mem_size: 512 MB
            disk_size: 1 GB
      properties:
        name: test-vdu-block-storage
        image: cirros-0.4.0-x86_64-disk
        availability_zone: nova
        mgmt_driver: noop
        config: |
          param0: key1
          param1: key2

    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        name: test-cp
        management: true
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1

    VB1:
      type: tosca.nodes.BlockStorage.Tacker
      properties:
        volume_id: my_vol

    CB1:
      type: tosca.nodes.BlockStorageAttachment
      properties:
        location: /dev/vdb
      requirements:
        - virtualBinding:
            node: VDU1
        - virtualAttachment:
            node: VB1
    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net_mgmt
        vendor: Tacker
@@ -298,3 +298,23 @@ class TestToscaUtils(testtools.TestCase):
         }
         volume_details = toscautils.get_block_storage_details(vnfd_dict)
         self.assertEqual(expected_dict, volume_details)
+
+    def test_get_block_storage_details_volume_id(self):
+        tosca_vol = _get_template(
+            'test-tosca-vnfd-existing-block-storage.yaml')
+        vnfd_dict = yaml.safe_load(tosca_vol)
+        expected_dict = {
+            'volumes': {
+                'VB1': {
+                    'volume_id': 'my_vol'
+                }
+            },
+            'volume_attachments': {
+                'CB1': {
+                    'instance_uuid': {'get_resource': 'VDU1'},
+                    'mountpoint': '/dev/vdb',
+                    'volume_id': {'get_param': 'my_vol'}}
+            }
+        }
+        volume_details = toscautils.get_block_storage_details(vnfd_dict)
+        self.assertEqual(expected_dict, volume_details)
@@ -376,6 +376,11 @@ node_types:
       image:
         type: string
        required: false
+      size:
+        type: scalar-unit.size
+        required: false
+        constraints:
+          - greater_or_equal: 1 MB
 
   tosca.nodes.BlockStorageAttachment:
     derived_from: tosca.nodes.Root
@@ -371,6 +371,10 @@ def get_volumes(template):
             continue
         volume_dict[node_name] = dict()
         block_properties = node_value.get('properties', {})
+        if 'volume_id' in block_properties:
+            volume_dict[node_name]['volume_id'] = block_properties['volume_id']
+            del node_tpl[node_name]
+            continue
         for prop_name, prop_value in block_properties.items():
             if prop_name == 'size':
                 prop_value = \
@@ -381,7 +385,7 @@ def get_volumes(template):
 
 
 @log.log
-def get_vol_attachments(template):
+def get_vol_attachments(template, volume_dict):
     vol_attach_dict = dict()
     node_tpl = template['topology_template']['node_templates']
     valid_properties = {
@@ -404,8 +408,12 @@ def get_vol_attachments(template):
                 vol_attach_dict[node_name]['instance_uuid'] = \
                     {'get_resource': req['virtualBinding']['node']}
             elif 'virtualAttachment' in req:
-                vol_attach_dict[node_name]['volume_id'] = \
-                    {'get_resource': req['virtualAttachment']['node']}
+                node = req['virtualAttachment']['node']
+                if 'volume_id' in volume_dict.get(node, {}):
+                    value = {'get_param': volume_dict[node]['volume_id']}
+                else:
+                    value = {'get_resource': node}
+                vol_attach_dict[node_name]['volume_id'] = value
         del node_tpl[node_name]
     return vol_attach_dict
 
@@ -413,8 +421,10 @@ def get_vol_attachments(template):
 @log.log
 def get_block_storage_details(template):
     block_storage_details = dict()
-    block_storage_details['volumes'] = get_volumes(template)
-    block_storage_details['volume_attachments'] = get_vol_attachments(template)
+    volume_dict = get_volumes(template)
+    block_storage_details['volumes'] = volume_dict
+    block_storage_details['volume_attachments'] = \
+        get_vol_attachments(template, volume_dict)
     return block_storage_details
 
 
@@ -938,6 +948,8 @@ def _convert_grant_info_vdu(heat_dict, vdu_name, vnf_resources):
 def add_volume_resources(heat_dict, vol_res):
     # Add cinder volumes
     for res_name, cinder_vol in vol_res['volumes'].items():
+        if 'volume_id' in cinder_vol:
+            continue
         heat_dict['resources'][res_name] = {
             'type': 'OS::Cinder::Volume',
             'properties': {}
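The net effect of these translator changes on the generated Heat template: a
BlockStorage node that carries volume_id produces no OS::Cinder::Volume
resource, and its attachment references the template input via get_param
instead of get_resource. A rough sketch of the two resulting resource
fragments follows, written as Python dicts; the instance_uuid and mountpoint
keys come from the unit test above, while the OS::Cinder::VolumeAttachment
type name is an assumption based on standard Heat usage and is not shown
verbatim in this diff.

# Illustrative comparison of the Heat resources produced for the two cases.
# Only the get_param / get_resource switch is taken directly from the diff;
# the attachment resource type is assumed.

# Case 1: VNFD defines size only -> a new volume is created and attached.
new_volume_case = {
    'VB1': {
        'type': 'OS::Cinder::Volume',
        'properties': {'size': 1},  # size in GB
    },
    'CB1': {
        'type': 'OS::Cinder::VolumeAttachment',  # assumed Heat type
        'properties': {
            'instance_uuid': {'get_resource': 'VDU1'},
            'mountpoint': '/dev/vdb',
            'volume_id': {'get_resource': 'VB1'},
        },
    },
}

# Case 2: VNFD defines volume_id -> no OS::Cinder::Volume resource is
# emitted; the attachment resolves the id from the template input.
existing_volume_case = {
    'CB1': {
        'type': 'OS::Cinder::VolumeAttachment',  # assumed Heat type
        'properties': {
            'instance_uuid': {'get_resource': 'VDU1'},
            'mountpoint': '/dev/vdb',
            'volume_id': {'get_param': 'my_vol'},
        },
    },
}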
@@ -11,6 +11,7 @@ fixtures>=3.0.0 # Apache-2.0/BSD
 hacking>=4.0.0,<4.1.0 # Apache-2.0
 python-subunit>=1.0.0 # Apache-2.0/BSD
 python-tackerclient>=0.8.0 # Apache-2.0
+python-cinderclient>=3.3.0 # Apache-2.0
 oslotest>=3.2.0 # Apache-2.0
 stestr>=2.0.0 # Apache-2.0
 tempest>=22.0.0 # Apache-2.0