Enable creation of multiple VNFs from a package

This patch removes the restriction that only one VNF can be created from
a single VNF package when deploying CNFs.
The way `VDU.properties.name` in the VNFD is associated with the actual
resource name has changed: a `vdu_mapping` parameter has been added to
InstantiateVnfRequest.additionalParams that, when specified, provides the
mapping explicitly.

The patch also updates the documentation to remove the restriction and to
describe how to instantiate with the `vdu_mapping` parameter.
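For illustration, a minimal sketch of the new parameter as it appears in
`additionalParams` (the file path and resource name below are taken from the
documentation example and are illustrative only):

    # Illustrative only: `vdu_mapping` ties each VDU ID in the VNFD to the
    # kind and name of the Kubernetes resource that is actually deployed,
    # so `VDU.properties.name` no longer has to match `metadata.name`.
    additional_params = {
        "lcm-kubernetes-def-files": ["Files/kubernetes/deployment.yaml"],
        "vdu_mapping": {
            "VDU1": {
                "kind": "Deployment",
                "name": "curry-probe-test001",
            }
        },
    }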

Implements: blueprint remove-cnf-restriction
Change-Id: I32c041b8fd996b9b338fe97eb1604567fd6b6aaf
Ayumu Ueha
2022-04-21 14:24:07 +00:00
parent 576b6e6eef
commit 44ef4b1c97
18 changed files with 839 additions and 301 deletions

View File

@@ -219,7 +219,8 @@ The following is a simple example of `deployment` resource.
- containerPort: 8080
protocol: TCP
.. note:: `metadata.name` in this file should be the same as
.. note:: If the instantiate parameter does not contain `vdu_mapping`,
`metadata.name` in this file should be the same as
`properties.name` of the corresponding VDU in the deployment flavor
definition file.
For the example in this procedure, `metadata.name` is same as
@@ -475,7 +476,8 @@ values of the VNF.
number_of_instances: 3
targets: [ VDU1 ]
.. note:: `VDU1.properties.name` should be same as `metadata.name` that
.. note:: If the instantiate parameter does not contain `vdu_mapping`,
`VDU1.properties.name` should be the same as `metadata.name` that is
defined in Kubernetes object file.
Therefore, `VDU1.properties.name` should be followed naming rules of
Kubernetes resource name. About detail of naming rules, please
@@ -687,6 +689,35 @@ vimId and vimType.
]
}
`additionalParams` can also contain the `vdu_mapping` parameter.
In this case, specify the kind and name of the resource corresponding to each
`VDU ID` defined in the VNFD as follows:
.. code-block:: console
$ cat ./instance_kubernetes.json
{
"flavourId": "simple",
"additionalParams": {
"lcm-kubernetes-def-files": [
"Files/kubernetes/deployment.yaml"
],
"vdu_mapping": {
"VDU1": {
"kind": "Deployment",
"name": "curry-probe-test001"
}
}
},
"vimConnectionInfo": [
{
"id": "8a3adb69-0784-43c7-833e-aab0b6ab4470",
"vimId": "8d8373fe-6977-49ff-83ac-7756572ed186",
"vimType": "kubernetes"
}
]
}
2. Execute the Instantiation Command
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Run `openstack vnflcm instantiate <VNF instance ID> <json file>` to instantiate

View File

@@ -307,73 +307,6 @@ chart file instead of "deployment.yaml".
4. Create VNFD
~~~~~~~~~~~~~~
For the original documentation, see `5. Create VNFD`_.
To deploy CNF using Helm chart, modify the
``topology_template.node_templates.VDUxx.properties.name`` value in
"helloworld3_df_simple.yaml".
The following is an example of setting when using an external repository and a
local Helm chart file.
Refer to :ref:`Set the Value to the Request Parameter File for Helm chart` for
the correspondence between the set value and the parameter.
If you are using a chart file stored in external repository, the
``topology_template.node_templates.VDUxx.properties.name`` value should be
"<helmreleasename> - <helmchartname>".
.. note:: If this value is not set as above, scale operation will not work.
This limitation will be removed in the future by modifying
additionalParams.
The following shows the relationship between
``topology_template.node_templates.VDUxx.properties.name`` when using an
external repository and the resource definition file created in the step
`Instantiate VNF`_.
.. code-block:: console
$ cat instance_helm.json
{
"helmreleasename": "vdu1",
"helmchartname": "externalhelm",
}
$ cat Definitions/helloworld3_df_simple.yaml
topology_template:
node_templates:
VDU1:
properties:
name: vdu1-externalhelm
If you are using local Helm chart file,
``topology_template.node_templates.VDUxx.properties.name`` value should be
"<helmreleasename> - <part of helmchartfile_path>".
.. note:: "part of helmchart_path" is the part of file name without
"-<version>.tgz" at the end. In the following example, it is
"localhelm".
.. note:: If this value is not set as above, scale operation will not work.
This limitation will be removed in the future by modifying
additionalParams.
The following shows the relationship between
``topology_template.node_templates.VDUxx.properties.name`` when using an
external repository and the resource definition file created in the step
`Instantiate VNF`_.
.. code-block:: console
$ cat instance_helm.json
{
"helmreleasename": "vdu1",
"helmchartfile_path": "Files/kubernetes/localhelm-0.1.0.tgz"
}
$ cat Definitions/helloworld3_df_simple.yaml
topology_template:
node_templates:
VDU1:
properties:
name: vdu1-localhelm
Instantiate VNF
^^^^^^^^^^^^^^^
@@ -447,6 +380,18 @@ following parameter to the json definition file to deploy CNF by Helm chart.
| | | value: Parameter for the number of replicas defined in |
| | | Helm values. |
+----------------------------+-----------+-----------------------------------------------------------+
|vdu_mapping | Dict | Parameters for associating "VDU ID" with resource |
| | | information and Helm install parameters. |
| | | "helmreleasename" in value shall be present if "use_helm" |
| | | is "true". |
| | | |
| | | key: "VDU ID" defined in VNFD. |
| | | value: Resource information corresponding to the |
| | | "VDU ID" key, for example: |
| | | "VDU1": { "kind": "Deployment", |
| | | "name": "resource-name", |
| | | "helmreleasename": "vdu1" } |
+----------------------------+-----------+-----------------------------------------------------------+
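A minimal sketch of the ``vdu_mapping`` structure described in the table above
(the names are placeholders taken from the local Helm chart example below):
.. code-block:: python
    # Placeholder values for illustration only: the "helmreleasename" set
    # here must also appear in one of the "using_helm_install_param" entries.
    vdu_mapping = {
        "VDU1": {
            "kind": "Deployment",
            "name": "vdu1-localhelm",
            "helmreleasename": "vdu1",
        }
    }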
If you are deploying using a chart file stored in external repository, set
``additionalParams.using_helm_install_param.exthelmchart`` to ``true``
@@ -477,6 +422,13 @@ a chart file stored in an external repository.
],
"helm_replica_values": {
"vdu1_aspect": "replicaCount"
},
"vdu_mapping": {
"VDU1": {
"kind": "Deployment",
"name": "vdu1-externalhelm",
"helmreleasename": "vdu1"
}
}
},
"vimConnectionInfo": [
@@ -488,10 +440,6 @@ a chart file stored in an external repository.
]
}
.. note:: The "helmreleasename" and "helmchartname" in the json file must
match the ``topology_template.node_templates.VDUxx.properties.name``
value set in the VNFD.
If you are deploying using a local Helm chart file, set
``additionalParams.using_helm_install_param.exthelmchart`` to "false"
and set other parameters.
@@ -519,6 +467,13 @@ a local Helm chart file.
],
"helm_replica_values": {
"vdu1_aspect": "replicaCount"
},
"vdu_mapping": {
"VDU1": {
"kind": "Deployment",
"name": "vdu1-localhelm",
"helmreleasename": "vdu1"
}
}
},
"vimConnectionInfo": [
@@ -530,16 +485,12 @@ a local Helm chart file.
]
}
.. note:: The "helmreleasename" and "helmchartfile_path" in the json file must
match the ``topology_template.node_templates.VDUxx.properties.name``
value set in the VNFD.
2. Check the Deployment in Kubernetes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For the original documentation, see `4. Check the Deployment in Kubernetes`_ .
In addition to checkpoints before modifying the procedure, ensure that the NAME
of the deployed CNF matches the value of
``topology_template.node_templates.VDUxx.properties.name`` in the VNFD.
``vdu_mapping.VDUxx.name`` in ``additionalParams``.
.. code-block:: console

View File

@@ -0,0 +1,105 @@
tosca_definitions_version: tosca_simple_yaml_1_2
description: Simple deployment flavour for Sample VNF
imports:
- etsi_nfv_sol001_common_types.yaml
- etsi_nfv_sol001_vnfd_types.yaml
- helloworld3_types.yaml
topology_template:
inputs:
descriptor_id:
type: string
descriptor_version:
type: string
provider:
type: string
product_name:
type: string
software_version:
type: string
vnfm_info:
type: list
entry_schema:
type: string
flavour_id:
type: string
flavour_description:
type: string
substitution_mappings:
node_type: company.provider.VNF
properties:
flavour_id: vdumap
requirements:
virtual_link_external: []
node_templates:
VNF:
type: company.provider.VNF
properties:
flavour_description: A simple flavour
VDU1:
type: tosca.nodes.nfv.Vdu.Compute
properties:
name: VDU1
description: VDU1 compute node
vdu_profile:
min_number_of_instances: 1
max_number_of_instances: 3
policies:
- scaling_aspects:
type: tosca.policies.nfv.ScalingAspects
properties:
aspects:
vdu1_aspect:
name: vdu1_aspect
description: vdu1 scaling aspect
max_scale_level: 2
step_deltas:
- delta_1
- VDU1_initial_delta:
type: tosca.policies.nfv.VduInitialDelta
properties:
initial_delta:
number_of_instances: 1
targets: [ VDU1 ]
- VDU1_scaling_aspect_deltas:
type: tosca.policies.nfv.VduScalingAspectDeltas
properties:
aspect: vdu1_aspect
deltas:
delta_1:
number_of_instances: 1
targets: [ VDU1 ]
- instantiation_levels:
type: tosca.policies.nfv.InstantiationLevels
properties:
levels:
instantiation_level_1:
description: Smallest size
scale_info:
vdu1_aspect:
scale_level: 0
instantiation_level_2:
description: Largest size
scale_info:
vdu1_aspect:
scale_level: 2
default_level: instantiation_level_1
- VDU1_instantiation_levels:
type: tosca.policies.nfv.VduInstantiationLevels
properties:
levels:
instantiation_level_1:
number_of_instances: 1
instantiation_level_2:
number_of_instances: 3
targets: [ VDU1 ]

View File

@@ -7,6 +7,7 @@ imports:
- etsi_nfv_sol001_vnfd_types.yaml
- helloworld3_types.yaml
- helloworld3_df_simple.yaml
- helloworld3_df_vdumap.yaml
topology_template:
inputs:

View File

@@ -0,0 +1,29 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: vdumap1
namespace: default
spec:
replicas: 1
selector:
matchLabels:
selector: curry-probe-test001
template:
metadata:
labels:
selector: curry-probe-test001
app: webserver
spec:
containers:
- name: nginx-liveness-probe
image: nginx
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
protocol: TCP
- image: celebdor/kuryr-demo
imagePullPolicy: IfNotPresent
name: kuryr-demo-readiness-probe
ports:
- containerPort: 8080
protocol: TCP

View File

@@ -0,0 +1,29 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: vdumap2
namespace: default
spec:
replicas: 1
selector:
matchLabels:
selector: curry-probe-test001
template:
metadata:
labels:
selector: curry-probe-test001
app: webserver
spec:
containers:
- name: nginx-liveness-probe
image: nginx
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
protocol: TCP
- image: celebdor/kuryr-demo
imagePullPolicy: IfNotPresent
name: kuryr-demo-readiness-probe
ports:
- containerPort: 8080
protocol: TCP

View File

@@ -142,3 +142,13 @@ Name: Files/kubernetes/statefulset_fail.yaml
Content-Type: test-data
Algorithm: SHA-256
Hash: 71a99017964b8ce1dfbbade92f3cdf42f1e5b774c6e7edb7aa83c5eee42e5d5e
Name: Files/kubernetes/deployment_vdumap1.yaml
Content-Type: test-data
Algorithm: SHA-256
Hash: a266edaa404d080b7e2d6eb9dc17fdefdf0449ae8189487e0746141e0424603d
Name: Files/kubernetes/deployment_vdumap2.yaml
Content-Type: test-data
Algorithm: SHA-256
Hash: a847e345e1d7120b53f1d27b3bbfec38b86780ed67d4780d0934352304b25ab4

View File

@@ -44,7 +44,7 @@ topology_template:
VDU1:
type: tosca.nodes.nfv.Vdu.Compute
properties:
name: vdu1-localhelm
name: VDU1
description: kubernetes resource as VDU1
vdu_profile:
min_number_of_instances: 1
@@ -53,7 +53,7 @@ topology_template:
VDU2:
type: tosca.nodes.nfv.Vdu.Compute
properties:
name: vdu2-apache
name: VDU2
description: kubernetes resource as VDU2
vdu_profile:
min_number_of_instances: 1
@@ -147,5 +147,5 @@ topology_template:
number_of_instances: 1
instantiation_level_2:
number_of_instances: 3
targets: [ VDU1 ]
targets: [ VDU2 ]

View File

@@ -460,6 +460,25 @@ class BaseVnfLcmKubernetesTest(base.BaseTackerTest):
return scale_level
def _test_scale_out_and_in(self, vnf_instance, aspect_id,
number_of_steps=1, error=False):
scale_level = self._get_scale_level_by_aspect_id(
vnf_instance, aspect_id)
# test scale out
scale_level = self._test_scale(
vnf_instance['id'], 'SCALE_OUT', aspect_id, scale_level,
number_of_steps, error)
if error:
return scale_level
# test scale in
scale_level = self._test_scale(
vnf_instance['id'], 'SCALE_IN', aspect_id, scale_level,
number_of_steps)
return scale_level
def _test_heal(self, vnf_instance, vnfc_instance_id):
before_vnfc_rscs = self._get_vnfc_resource_info(vnf_instance)
self._heal_vnf_instance(vnf_instance['id'], vnfc_instance_id)

View File

@@ -303,3 +303,84 @@ class VnfLcmKubernetesTest(vnflcm_base.BaseVnfLcmKubernetesTest):
vnf_instance['id'], request_body, wait_state="FAILED_TEMP")
self._test_rollback_cnf_instantiate(vnf_instance['id'])
self._delete_vnf_instance(vnf_instance['id'])
def test_cnf_with_vdu_mapping(self):
"""Test CNF LCM with vdu_mapping parameter.
Tests that multiple VNFs can be created from one VNF Package by using
the `vdu_mapping` parameter.
"""
# create VNF1
_, vnf_instance1 = self._create_vnf_instance(
self.vnfd_id,
vnf_instance_name="cnf_with_vdu_mapping_1",
vnf_instance_description="cnf with vdu_mapping 1")
self.assertIsNotNone(vnf_instance1['id'])
# create VNF2
_, vnf_instance2 = self._create_vnf_instance(
self.vnfd_id,
vnf_instance_name="cnf_with_vdu_mapping_2",
vnf_instance_description="cnf with vdu_mapping 2")
self.assertIsNotNone(vnf_instance2['id'])
# instantiate VNF1
additional_param1 = {
"lcm-kubernetes-def-files": [
"Files/kubernetes/deployment_vdumap1.yaml",
],
"vdu_mapping": {
"VDU1": {
"name": "vdumap1",
"kind": "Deployment"
}
}
}
request_body1 = self._instantiate_vnf_instance_request(
"vdumap", vim_id=self.vim_id, additional_param=additional_param1)
self._instantiate_vnf_instance(vnf_instance1['id'], request_body1)
# instantiate VNF2
additional_param2 = {
"lcm-kubernetes-def-files": [
"Files/kubernetes/deployment_vdumap2.yaml",
],
"vdu_mapping": {
"VDU1": {
"name": "vdumap2",
"kind": "Deployment"
}
}
}
request_body2 = self._instantiate_vnf_instance_request(
"vdumap", vim_id=self.vim_id, additional_param=additional_param2)
self._instantiate_vnf_instance(vnf_instance2['id'], request_body2)
# scale VNF1
vnf_instance1 = self._show_vnf_instance(vnf_instance1['id'])
self._test_scale_out_and_in(vnf_instance1, "vdu1_aspect")
# scale VNF2
vnf_instance2 = self._show_vnf_instance(vnf_instance2['id'])
self._test_scale_out_and_in(vnf_instance2, "vdu1_aspect")
# heal VNF1 SOL-002 (partial heal)
vnf_instance1 = self._show_vnf_instance(vnf_instance1['id'])
vnf1_vnfc_rscs = self._get_vnfc_resource_info(vnf_instance1)
self._test_heal(vnf_instance1, [vnf1_vnfc_rscs[0]['id']])
# heal VNF2 SOL-002 (partial heal)
vnf_instance2 = self._show_vnf_instance(vnf_instance2['id'])
vnf2_vnfc_rscs = self._get_vnfc_resource_info(vnf_instance2)
self._test_heal(vnf_instance2, [vnf2_vnfc_rscs[0]['id']])
# heal VNF1 SOL-003 (entire heal)
vnf_instance1 = self._show_vnf_instance(vnf_instance1['id'])
self._test_heal(vnf_instance1, [])
# heal VNF2 SOL-003 (entire heal)
vnf_instance2 = self._show_vnf_instance(vnf_instance2['id'])
self._test_heal(vnf_instance2, [])
# terminate VNF1
self._terminate_vnf_instance(vnf_instance1['id'])
# terminate VNF2
self._terminate_vnf_instance(vnf_instance2['id'])
self._delete_vnf_instance(vnf_instance1['id'])
self._delete_vnf_instance(vnf_instance2['id'])

View File

@@ -124,6 +124,18 @@ class VnfLcmKubernetesHelmTest(vnflcm_base.BaseVnfLcmKubernetesTest):
"helm_replica_values": {
"vdu1_aspect": "replicaCount",
"vdu2_aspect": "replicaCount"
},
"vdu_mapping": {
"VDU1": {
"name": "vdu1-localhelm",
"kind": "Deployment",
"helmreleasename": "vdu1"
},
"VDU2": {
"name": "vdu2-apache",
"kind": "Deployment",
"helmreleasename": "vdu2"
}
}
}
vnf_instance = self._create_and_instantiate_vnf_instance(

View File

@@ -29,25 +29,6 @@ class VnfLcmKubernetesMultiNsTest(vnflcm_base.BaseVnfLcmKubernetesTest):
def tearDownClass(cls):
super(VnfLcmKubernetesMultiNsTest, cls).tearDownClass()
def _test_cnf_scale(self, vnf_instance, aspect_id,
number_of_steps=1, error=False):
scale_level = self._get_scale_level_by_aspect_id(
vnf_instance, aspect_id)
# test scale out
scale_level = self._test_scale(
vnf_instance['id'], 'SCALE_OUT', aspect_id, scale_level,
number_of_steps, error)
if error:
return scale_level
# test scale in
scale_level = self._test_scale(
vnf_instance['id'], 'SCALE_IN', aspect_id, scale_level,
number_of_steps)
return scale_level
def test_multi_tenant_k8s_additional_params(self):
vnf_instance_name = "multi_tenant_k8s_additional_params"
vnf_instance_description = "multi tenant k8s additional params"
@@ -61,7 +42,8 @@ class VnfLcmKubernetesMultiNsTest(vnflcm_base.BaseVnfLcmKubernetesTest):
self.vnfd_id, "simple", vnf_instance_name,
vnf_instance_description, additional_param)
# scale
self._test_cnf_scale(vnf_instance, "vdu1_aspect", number_of_steps=1)
self._test_scale_out_and_in(
vnf_instance, "vdu1_aspect", number_of_steps=1)
before_vnfc_rscs = self._get_vnfc_resource_info(vnf_instance)
deployment_target_vnfc = [vnfc_rsc for vnfc_rsc in before_vnfc_rscs if
@@ -91,7 +73,8 @@ class VnfLcmKubernetesMultiNsTest(vnflcm_base.BaseVnfLcmKubernetesTest):
self.vnfd_id, "simple", vnf_instance_name,
vnf_instance_description, additional_param)
# scale
self._test_cnf_scale(vnf_instance, "vdu1_aspect", number_of_steps=1)
self._test_scale_out_and_in(
vnf_instance, "vdu1_aspect", number_of_steps=1)
before_vnfc_rscs = self._get_vnfc_resource_info(vnf_instance)
deployment_target_vnfc = [vnfc_rsc for vnfc_rsc in before_vnfc_rscs if
@@ -120,7 +103,8 @@ class VnfLcmKubernetesMultiNsTest(vnflcm_base.BaseVnfLcmKubernetesTest):
self.vnfd_id, "simple", vnf_instance_name,
vnf_instance_description, additional_param)
# scale
self._test_cnf_scale(vnf_instance, "vdu2_aspect", number_of_steps=1)
self._test_scale_out_and_in(
vnf_instance, "vdu2_aspect", number_of_steps=1)
before_vnfc_rscs = self._get_vnfc_resource_info(vnf_instance)
deployment_target_vnfc = [vnfc_rsc for vnfc_rsc in before_vnfc_rscs if

View File

@@ -32,25 +32,6 @@ class VnfLcmKubernetesScaleTest(vnflcm_base.BaseVnfLcmKubernetesTest):
def tearDownClass(cls):
super(VnfLcmKubernetesScaleTest, cls).tearDownClass()
def _test_cnf_scale(self, vnf_instance, aspect_id,
number_of_steps=1, error=False):
scale_level = self._get_scale_level_by_aspect_id(
vnf_instance, aspect_id)
# test scale out
scale_level = self._test_scale(
vnf_instance['id'], 'SCALE_OUT', aspect_id, scale_level,
number_of_steps, error)
if error:
return scale_level
# test scale in
scale_level = self._test_scale(
vnf_instance['id'], 'SCALE_IN', aspect_id, scale_level,
number_of_steps)
return scale_level
def test_scale_cnf_with_statefulset(self):
"""Test scale for CNF (StatefulSet)
@@ -64,7 +45,7 @@ class VnfLcmKubernetesScaleTest(vnflcm_base.BaseVnfLcmKubernetesTest):
vnf_instance = self._create_and_instantiate_vnf_instance(
self.vnfd_id, "simple", vnf_instance_name,
vnf_instance_description, inst_additional_param)
self._test_cnf_scale(vnf_instance, "vdu1_aspect")
self._test_scale_out_and_in(vnf_instance, "vdu1_aspect")
self._terminate_vnf_instance(vnf_instance['id'])
self._delete_vnf_instance(vnf_instance['id'])
@@ -81,7 +62,7 @@ class VnfLcmKubernetesScaleTest(vnflcm_base.BaseVnfLcmKubernetesTest):
vnf_instance = self._create_and_instantiate_vnf_instance(
self.vnfd_id, "simple", vnf_instance_name,
vnf_instance_description, inst_additional_param)
self._test_cnf_scale(vnf_instance, "vdu1_aspect")
self._test_scale_out_and_in(vnf_instance, "vdu1_aspect")
self._terminate_vnf_instance(vnf_instance['id'])
self._delete_vnf_instance(vnf_instance['id'])
@@ -101,7 +82,8 @@ class VnfLcmKubernetesScaleTest(vnflcm_base.BaseVnfLcmKubernetesTest):
self.vnfd_id, "scalingsteps", vnf_instance_name,
vnf_instance_description, inst_additional_param)
# Use flavour_id scalingsteps that is set to delta_num=2
self._test_cnf_scale(vnf_instance, "vdu1_aspect", number_of_steps=2)
self._test_scale_out_and_in(
vnf_instance, "vdu1_aspect", number_of_steps=2)
self._terminate_vnf_instance(vnf_instance['id'])
self._delete_vnf_instance(vnf_instance['id'])
@@ -121,8 +103,8 @@ class VnfLcmKubernetesScaleTest(vnflcm_base.BaseVnfLcmKubernetesTest):
vnf_instance_description, inst_additional_param)
# fail scale out for rollback
aspect_id = "vdu1_aspect"
previous_level = self._test_cnf_scale(vnf_instance, aspect_id,
number_of_steps=2, error=True)
previous_level = self._test_scale_out_and_in(
vnf_instance, aspect_id, number_of_steps=2, error=True)
# test rollback
self._test_rollback_cnf_scale(
vnf_instance['id'], aspect_id, previous_level)

View File

@@ -1304,7 +1304,7 @@ def vnf_dict_cnf():
return vnf_dict
def vnfd_dict_cnf():
def vnfd_dict_cnf(vdu_num=1):
tacker_dir = os.getcwd()
def_dir = tacker_dir + "/samples/vnf_packages/Definitions/"
vnfd_dict = {
@@ -1319,42 +1319,21 @@ def vnfd_dict_cnf():
"VNF": {
"type": "company.provider.VNF",
"properties": {
"flavour_description": "A simple flavour"}},
"VDU1": {
"type": "tosca.nodes.nfv.Vdu.Compute",
"properties": {
"name": "vdu1",
"description": "vdu1 compute node",
"vdu_profile": {
"min_number_of_instances": 1,
"max_number_of_instances": 3}}}},
"flavour_description": "A simple flavour"}}
# add VDU definitions later
},
"policies": [
{
"scaling_aspects": {
"type": "tosca.policies.nfv.ScalingAspects",
"properties": {
"aspects": {
"vdu1_aspect": {
"name": "vdu1_aspect",
"description": "vdu1 scaling aspect",
"max_scale_level": 2,
"step_deltas": ["delta_1"]}}}}},
{
"vdu1_initial_delta": {
"type": "tosca.policies.nfv.VduInitialDelta",
"properties": {
"initial_delta": {
"number_of_instances": 0}},
"targets": ["VDU1"]}},
{
"vdu1_scaling_aspect_deltas": {
"type": "tosca.policies.nfv.VduScalingAspectDeltas",
"properties": {
"aspect": "vdu1_aspect",
"deltas": {
"delta_1": {
"number_of_instances": 1}}},
"targets": ["VDU1"]}},
# add aspects later
}}}},
# add the following policy definitions later
# - tosca.policies.nfv.VduInitialDelta
# - tosca.policies.nfv.VduScalingAspectDeltas
# - tosca.policies.nfv.VduInstantiationLevels
{
"instantiation_levels": {
"type": "tosca.policies.nfv.InstantiationLevels",
@@ -1363,28 +1342,64 @@ def vnfd_dict_cnf():
"instantiation_level_1": {
"description": "Smallest size",
"scale_info": {
"vdu1_aspect": {
"scale_level": 0}}},
# add scale_info later
}},
"instantiation_level_2": {
"description": "Largest size",
"scale_info": {
"vdu1_aspect": {
"scale_level": 2}}}
# add scale_info later
}}
},
"default_level": "instantiation_level_1"}}},
{
"vdu1_instantiation_levels": {
"type": "tosca.policies.nfv.VduInstantiationLevels",
"properties": {
"levels": {
"instantiation_level_1": {
"number_of_instances": 0},
"instantiation_level_2": {
"number_of_instances": 2}}},
"targets": ["VDU1"]}}
"default_level": "instantiation_level_1"}}}
]
}
}
topology = vnfd_dict["topology_template"]
node_templates = topology["node_templates"]
policies = topology["policies"]
scaling_aspects = policies[0]["scaling_aspects"]
levels = policies[1]["instantiation_levels"]["properties"]["levels"]
for i in range(1, vdu_num + 1):
node_templates[f"VDU{i}"] = {
"type": "tosca.nodes.nfv.Vdu.Compute",
"properties": {
"name": f"vdu{i}",
"description": f"vdu{i} compute node",
"vdu_profile": {
"min_number_of_instances": 1,
"max_number_of_instances": 3}}}
scaling_aspects["properties"]["aspects"][f"vdu{i}_aspect"] = {
"name": f"vdu{i}_aspect",
"description": f"vdu{i} scaling aspect",
"max_scale_level": 2,
"step_deltas": ["delta_1"]}
policies.append({f"vdu{i}_initial_delta": {
"type": "tosca.policies.nfv.VduInitialDelta",
"properties": {
"initial_delta": {
"number_of_instances": 0}},
"targets": [f"VDU{i}"]}})
policies.append({f"vdu{i}_scaling_aspect_deltas": {
"type": "tosca.policies.nfv.VduScalingAspectDeltas",
"properties": {
"aspect": f"vdu{i}_aspect",
"deltas": {
"delta_1": {
"number_of_instances": 1}}},
"targets": [f"VDU{i}"]}})
policies.append({f"vdu{i}_instantiation_levels": {
"type": "tosca.policies.nfv.VduInstantiationLevels",
"properties": {
"levels": {
"instantiation_level_1": {
"number_of_instances": 0},
"instantiation_level_2": {
"number_of_instances": 2}}},
"targets": [f"VDU{i}"]}})
levels["instantiation_level_1"]["scale_info"] = {
f"vdu{i}_aspect": {"scale_level": 0}}
levels["instantiation_level_2"]["scale_info"] = {
f"vdu{i}_aspect": {"scale_level": 2}}
return vnfd_dict

View File

@@ -1151,7 +1151,9 @@ def fake_vim_connection_info_with_extra(del_field=None, multi_ip=False):
def fake_inst_vnf_req_for_helmchart(external=True, local=True, namespace=None):
additional_params = {"use_helm": "true"}
using_helm_install_param = list()
using_helm_install_param = []
vdu_mapping = {}
vdu_num = 0
if external:
using_helm_install_param.append(
{
@@ -1162,6 +1164,12 @@ def fake_inst_vnf_req_for_helmchart(external=True, local=True, namespace=None):
"exthelmrepo_url": "http://helmrepo.example.com/sample-charts"
}
)
vdu_num += 1
vdu_mapping[f"VDU{vdu_num}"] = {
"kind": "Deployment",
"name": f"vdu{vdu_num}",
"helmreleasename": "myrelease-ext"
}
if local:
using_helm_install_param.append(
{
@@ -1174,8 +1182,16 @@ def fake_inst_vnf_req_for_helmchart(external=True, local=True, namespace=None):
]
}
)
vdu_num += 1
vdu_mapping[f"VDU{vdu_num}"] = {
"kind": "Deployment",
"name": f"vdu{vdu_num}",
"helmreleasename": "myrelease-local"
}
additional_params['using_helm_install_param'] = using_helm_install_param
additional_params['helm_replica_values'] = {"vdu1_aspect": "replicaCount"}
additional_params['helm_replica_values'] = {
f"vdu{i}_aspect": "replicaCount" for i in range(1, vdu_num + 1)}
additional_params['vdu_mapping'] = vdu_mapping
if namespace:
additional_params['namespace'] = namespace

View File

@@ -692,6 +692,79 @@ class TestKubernetes(base.TestCase):
self.assertEqual(item[0].resource_name, 'curry-endpoint-test001')
self.assertEqual(item[0].resource_type, 'v1,Pod')
@mock.patch('tacker.vnflcm.utils._get_vnfd_dict')
@mock.patch('tacker.objects.vnf_instance.VnfInstance.save')
@mock.patch.object(vnf_package.VnfPackage, "get_by_id")
@mock.patch.object(vnf_package_vnfd.VnfPackageVnfd, "get_by_id")
def test_pre_instantiation_vnf_with_vdu_mapping(
self, mock_vnfd_by_id, mock_vnf_by_id, mock_save, mock_vnfd_dict):
vnf_instance = fd_utils.get_vnf_instance_object()
vim_connection_info = None
vnf_software_images = None
vnf_package_path = self.yaml_path
instantiate_vnf_req = objects.InstantiateVnfRequest(
flavour_id='simple',
additional_params={
'lcm-kubernetes-def-files':
["testdata_artifact_file_content.yaml"],
'vdu_mapping': {
'VDU1': {
'name': 'curry-endpoint-test001',
'kind': 'Pod'
}}
}
)
fake_vnfd_get_by_id = models.VnfPackageVnfd()
fake_vnfd_get_by_id.package_uuid = "f8c35bd0-4d67" \
"-4436-9f11-14b8a84c92aa"
fake_vnfd_get_by_id.vnfd_id = "f8c35bd0-4d67-4436-9f11-14b8a84c92aa"
fake_vnfd_get_by_id.vnf_provider = "fake_provider"
fake_vnfd_get_by_id.vnf_product_name = "fake_providername"
fake_vnfd_get_by_id.vnf_software_version = "fake_software_version"
fake_vnfd_get_by_id.vnfd_version = "fake_vnfd_version"
mock_vnfd_by_id.return_value = fake_vnfd_get_by_id
fake_vnf_get_by_id = models.VnfPackage()
fake_vnf_get_by_id.onboarding_state = "ONBOARD"
fake_vnf_get_by_id.operational_state = "ENABLED"
fake_vnf_get_by_id.usage_state = "NOT_IN_USE"
fake_vnf_get_by_id.size = 128
mock_artifacts = models.VnfPackageArtifactInfo()
mock_artifacts.package_uuid = "f8c35bd0-4d67-4436-9f11-14b8a84c92aa"
mock_artifacts.artifact_path = "testdata_artifact_file_content.yaml"
mock_artifacts.algorithm = "SHA-256"
mock_artifacts.hash = "fake_hash"
fake_vnf_get_by_id.vnf_artifacts = [mock_artifacts]
mock_vnf_by_id.return_value = fake_vnf_get_by_id
mock_vnfd_dict.return_value = vnflcm_fakes.vnfd_dict_cnf()
new_k8s_objs = self.kubernetes.pre_instantiation_vnf(
self.context, vnf_instance, vim_connection_info,
vnf_software_images,
instantiate_vnf_req, vnf_package_path)
for item in new_k8s_objs.values():
self.assertEqual(item[0].resource_name, 'curry-endpoint-test001')
self.assertEqual(item[0].resource_type, 'v1,Pod')
def test_validate_vdu_id_in_vdu_mapping_fail(self):
vdu_mapping = {'VDUX': {'name': 'dummy_name', 'kind': 'Pod'}}
vnfd = vnflcm_fakes.vnfd_dict_cnf()
exc = self.assertRaises(exceptions.InvalidInput,
self.kubernetes._validate_vdu_id_in_vdu_mapping,
vdu_mapping, vnfd)
msg = ("Parameter input values missing 'vdu_id={'VDU1'}' "
"in vdu_mapping")
self.assertEqual(msg, exc.format_message())
def test_validate_k8s_rsc_in_vdu_mapping_fail(self):
vdu_mapping = {'VDU1': {'name': 'dummy_name', 'kind': 'Pod'}}
kind = 'Pod'
name = 'invalid_name'
exc = self.assertRaises(exceptions.InvalidInput,
self.kubernetes._validate_k8s_rsc_in_vdu_mapping,
vdu_mapping, kind, name)
msg = (f"Parameter input values missing resource info '{kind}:{name}' "
"in vdu_mapping")
self.assertEqual(msg, exc.format_message())
def _delete_single_vnf_resource(self, mock_vnf_resource_list,
resource_name, resource_type,
terminate_vnf_req=None):
@@ -2333,6 +2406,67 @@ class TestKubernetes(base.TestCase):
mock_read_namespaced_deployment_scale.assert_called_once()
mock_patch_namespaced_deployment_scale.assert_called_once()
@mock.patch.object(client.AppsV1Api, 'patch_namespaced_deployment_scale')
@mock.patch.object(client.AppsV1Api, 'read_namespaced_deployment_scale')
@mock.patch.object(objects.VnfInstance, "get_by_id")
@mock.patch.object(objects.VnfResourceList, "get_by_vnf_instance_id")
def test_scale_out_with_vdu_mapping(self, mock_vnf_resource_list,
mock_vnf_instance_get_by_id,
mock_read_namespaced_deployment_scale,
mock_patch_namespaced_deployment_scale):
policy = fakes.get_scale_policy(type='out')
mock_vnf_resource_list.return_value = \
fakes.get_vnf_resource_list(kind='Deployment')
scale_status = objects.ScaleInfo(
aspect_id='vdu1_aspect', scale_level=1)
scale_vnf_instance = vnflcm_fakes.return_vnf_instance(
fields.VnfInstanceState.INSTANTIATED,
scale_status=scale_status)
scale_vnf_instance.vnf_metadata['namespace'] = "default"
vdu_mapping = {
'VDU1': {
'name': 'dummy-name',
'kind': 'Deployment'
}}
inst_vnf_info = scale_vnf_instance.instantiated_vnf_info
inst_vnf_info.additional_params['vdu_mapping'] = vdu_mapping
mock_vnf_instance_get_by_id.return_value = scale_vnf_instance
mock_read_namespaced_deployment_scale.return_value = \
client.V1Scale(spec=client.V1ScaleSpec(replicas=1),
status=client.V1ScaleStatus(replicas=1))
mock_patch_namespaced_deployment_scale.return_value = \
client.V1Scale(spec=client.V1ScaleSpec(replicas=2),
status=client.V1ScaleStatus(replicas=2))
self.kubernetes.scale(context=self.context, plugin=None,
auth_attr=utils.get_vim_auth_obj(),
policy=policy,
region_name=None)
mock_read_namespaced_deployment_scale.assert_called_once()
mock_patch_namespaced_deployment_scale.assert_called_once()
@mock.patch.object(objects.VnfInstance, "get_by_id")
@mock.patch.object(objects.VnfResourceList, "get_by_vnf_instance_id")
def test_scale_invalid_vdu_mapping(self, mock_vnf_resource_list,
mock_vnf_instance_get_by_id):
policy = fakes.get_scale_policy(type='in')
mock_vnf_resource_list.return_value = \
fakes.get_vnf_resource_list(kind='Pod')
scale_vnf_instance = vnflcm_fakes.return_vnf_instance(
fields.VnfInstanceState.INSTANTIATED)
scale_vnf_instance.vnf_metadata['namespace'] = "default"
vdu_mapping = {
'VDU2': {
'name': 'dummy-name',
'kind': 'Deployment'
}}
inst_vnf_info = scale_vnf_instance.instantiated_vnf_info
inst_vnf_info.additional_params['vdu_mapping'] = vdu_mapping
mock_vnf_instance_get_by_id.return_value = scale_vnf_instance
self.assertRaises(vnfm.CNFScaleFailed,
self.kubernetes.scale,
self.context, None,
utils.get_vim_auth_obj(), policy, None)
@mock.patch.object(objects.VnfInstance, "get_by_id")
@mock.patch.object(objects.VnfResourceList, "get_by_vnf_instance_id")
def test_scale_target_not_found(self, mock_vnf_resource_list,
@@ -2532,6 +2666,12 @@ class TestKubernetes(base.TestCase):
mock_read_namespaced_deployment_scale.return_value = \
client.V1Scale(spec=client.V1ScaleSpec(replicas=1),
status=client.V1ScaleStatus(replicas=1))
scale_status = objects.ScaleInfo(aspect_id='vdu1', scale_level=0)
scale_vnf_instance = vnflcm_fakes.return_vnf_instance(
fields.VnfInstanceState.INSTANTIATED,
scale_status=scale_status)
scale_vnf_instance.vnf_metadata['namespace'] = "default"
mock_vnf_instance.return_value = scale_vnf_instance
self.kubernetes.scale_wait(context=self.context, plugin=None,
auth_attr=utils.get_vim_auth_obj(),
policy=policy,
@@ -2556,6 +2696,12 @@ class TestKubernetes(base.TestCase):
mock_read_namespaced_stateful_set_scale.return_value = \
client.V1Scale(spec=client.V1ScaleSpec(replicas=1),
status=client.V1ScaleStatus(replicas=1))
scale_status = objects.ScaleInfo(aspect_id='vdu1', scale_level=0)
scale_vnf_instance = vnflcm_fakes.return_vnf_instance(
fields.VnfInstanceState.INSTANTIATED,
scale_status=scale_status)
scale_vnf_instance.vnf_metadata['namespace'] = "default"
mock_vnf_instance.return_value = scale_vnf_instance
self.kubernetes.scale_wait(context=self.context, plugin=None,
auth_attr=utils.get_vim_auth_obj(),
policy=policy,
@@ -2580,6 +2726,12 @@ class TestKubernetes(base.TestCase):
mock_read_namespaced_replica_set_scale.return_value = \
client.V1Scale(spec=client.V1ScaleSpec(replicas=1),
status=client.V1ScaleStatus(replicas=1))
scale_status = objects.ScaleInfo(aspect_id='vdu1', scale_level=0)
scale_vnf_instance = vnflcm_fakes.return_vnf_instance(
fields.VnfInstanceState.INSTANTIATED,
scale_status=scale_status)
scale_vnf_instance.vnf_metadata['namespace'] = "default"
mock_vnf_instance.return_value = scale_vnf_instance
self.kubernetes.scale_wait(context=self.context, plugin=None,
auth_attr=utils.get_vim_auth_obj(),
policy=policy,
@@ -2587,18 +2739,6 @@ class TestKubernetes(base.TestCase):
last_event_id=None)
mock_list_namespaced_pod.assert_called_once()
@mock.patch.object(objects.VnfInstance, "get_by_id")
@mock.patch.object(objects.VnfResourceList, "get_by_vnf_instance_id")
def test_scale_wait_target_not_found(
self, mock_vnf_resource_list, mock_vnf_instance):
policy = fakes.get_scale_policy(type='out')
mock_vnf_resource_list.return_value = \
fakes.get_vnf_resource_list(kind='Depoyment', name='other_name')
self.assertRaises(vnfm.CNFScaleWaitFailed,
self.kubernetes.scale_wait,
self.context, None,
utils.get_vim_auth_obj(), policy, None, None)
@mock.patch.object(objects.VnfInstance, "get_by_id")
@mock.patch.object(client.AppsV1Api, 'read_namespaced_deployment_scale')
@mock.patch.object(client.CoreV1Api, 'list_namespaced_pod')
@@ -2617,6 +2757,12 @@ class TestKubernetes(base.TestCase):
mock_read_namespaced_deployment_scale.return_value = \
client.V1Scale(spec=client.V1ScaleSpec(replicas=2),
status=client.V1ScaleStatus(replicas=2))
scale_status = objects.ScaleInfo(aspect_id='vdu1', scale_level=0)
scale_vnf_instance = vnflcm_fakes.return_vnf_instance(
fields.VnfInstanceState.INSTANTIATED,
scale_status=scale_status)
scale_vnf_instance.vnf_metadata['namespace'] = "default"
mock_vnf_instance.return_value = scale_vnf_instance
self.assertRaises(vnfm.CNFScaleWaitFailed,
self.kubernetes.scale_wait,
self.context, None,
@@ -2640,6 +2786,12 @@ class TestKubernetes(base.TestCase):
mock_read_namespaced_deployment_scale.return_value = \
client.V1Scale(spec=client.V1ScaleSpec(replicas=2),
status=client.V1ScaleStatus(replicas=2))
scale_status = objects.ScaleInfo(aspect_id='vdu1', scale_level=0)
scale_vnf_instance = vnflcm_fakes.return_vnf_instance(
fields.VnfInstanceState.INSTANTIATED,
scale_status=scale_status)
scale_vnf_instance.vnf_metadata['namespace'] = "default"
mock_vnf_instance.return_value = scale_vnf_instance
self.assertRaises(vnfm.CNFScaleWaitFailed,
self.kubernetes.scale_wait,
self.context, None,

View File

@@ -310,6 +310,31 @@ class TestKubernetesHelm(base.TestCase):
msg = f"Replica value for aspectId '{aspect_id}' is missing"
self.assertEqual(msg, exc.format_message())
@mock.patch.object(objects.VnfPackageVnfd, "get_by_id")
@mock.patch('tacker.vnflcm.utils._get_vnfd_dict')
def test_pre_helm_install_missing_helmreleasename_in_vdu_mapping(
self, mock_vnfd_dict, mock_vnf_package_vnfd_get_by_id):
vnf_instance = fd_utils.get_vnf_instance_object()
vim_connection_info = fakes.fake_vim_connection_info_with_extra()
vnf_package_path = self.package_path
instantiate_vnf_req = fakes.fake_inst_vnf_req_for_helmchart(
local=False)
helm_install_params = (instantiate_vnf_req
.additional_params['using_helm_install_param'])
helm_install_params[0]['helmreleasename'] = 'invalid_relname'
mock_vnfd_dict.return_value = vnflcm_fakes.vnfd_dict_cnf()
mock_vnf_package_vnfd_get_by_id.return_value = (
vnflcm_fakes.return_vnf_package_vnfd())
exc = self.assertRaises(exceptions.InvalidInput,
self.kubernetes._pre_helm_install,
self.context, vnf_instance,
vim_connection_info, instantiate_vnf_req,
vnf_package_path)
expect_relname = helm_install_params[0]['helmreleasename']
msg = ("Parameter input values missing 'helmreleasename="
f"{expect_relname}' in vdu_mapping")
self.assertEqual(msg, exc.format_message())
@mock.patch.object(objects.VnfResource, 'create')
@mock.patch.object(paramiko.Transport, 'close')
@mock.patch.object(paramiko.SFTPClient, 'put')
@@ -443,7 +468,7 @@ class TestKubernetesHelm(base.TestCase):
self, mock_vnfd_dict, mock_vnf_package_vnfd_get_by_id,
mock_list_namespaced_pod, mock_command):
vim_connection_info = fakes.fake_vim_connection_info_with_extra()
mock_vnfd_dict.return_value = vnflcm_fakes.vnfd_dict_cnf()
mock_vnfd_dict.return_value = vnflcm_fakes.vnfd_dict_cnf(vdu_num=2)
mock_vnf_package_vnfd_get_by_id.return_value = \
vnflcm_fakes.return_vnf_package_vnfd()
mock_list_namespaced_pod.return_value =\
@@ -678,3 +703,32 @@ class TestKubernetesHelm(base.TestCase):
msg = ("CNF Scale Failed with reason: The number of target replicas "
"after scaling [4] is out of range")
self.assertEqual(msg, exc.format_message())
@mock.patch.object(helm_client.HelmClient, '_execute_command')
@mock.patch.object(vim_client.VimClient, 'get_vim')
@mock.patch.object(objects.VnfInstance, "get_by_id")
def test_scale_param_not_found_in_vdu_mapping(
self, mock_vnf_instance_get_by_id, mock_get_vim, mock_command):
policy = fakes.get_scale_policy(type='out', aspect_id='vdu1_aspect',
vdu_name='vdu1')
scale_status = objects.ScaleInfo(
aspect_id='vdu1_aspect', scale_level=1)
mock_get_vim.return_value = fakes.fake_k8s_vim_obj()
vim_connection_info = fakes.fake_vim_connection_info_with_extra()
instantiate_vnf_req = fakes.fake_inst_vnf_req_for_helmchart()
vdu_mapping = instantiate_vnf_req.additional_params['vdu_mapping']
vdu_mapping['VDU1']['helmreleasename'] = 'invalid_releasename'
vnf_instance = copy.deepcopy(self.vnf_instance)
vnf_instance.vim_connection_info = [vim_connection_info]
vnf_instance.scale_status = [scale_status]
vnf_instance.instantiated_vnf_info.additional_params = (
instantiate_vnf_req.additional_params)
mock_vnf_instance_get_by_id.return_value = vnf_instance
mock_command.side_effect = fakes.execute_cmd_helm_client
exc = self.assertRaises(vnfm.CNFScaleFailed,
self.kubernetes.scale,
self.context, None, utils.get_vim_auth_obj(),
policy, None)
msg = ("CNF Scale Failed with reason: Appropriate parameter for "
"vdu1_aspect is not found in using_helm_install_param")
self.assertEqual(msg, exc.format_message())

View File

@@ -1195,31 +1195,35 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
helm_install_params = inst_additional_params.get(
'using_helm_install_param', [])
# Get releasename and chartname from Helm install params in Instantiate
# request parameter by using VDU properties name.
found_flag = False
for vdu_def in vdu_defs.values():
# request parameter by using vdu_mapping.
vdu_mapping = inst_additional_params.get('vdu_mapping')
for vdu_id, vdu_def in vdu_defs.items():
release_name = (vdu_mapping.get(vdu_id, {}).get('helmreleasename'))
if not release_name:
continue
helm_install_param = [
param for param in helm_install_params
if param.get('helmreleasename') == release_name]
if not helm_install_param:
error_reason = (f"Appropriate parameter for {aspect_id} is "
"not found in using_helm_install_param")
LOG.error(error_reason)
raise vnfm.CNFScaleFailed(reason=error_reason)
if self._is_exthelmchart(helm_install_param[0]):
chart_name = helm_install_param[0].get('helmchartname')
upgrade_chart_name = "/".join(
[helm_install_param[0].get('helmrepositoryname'),
chart_name])
else:
chartfile_path = helm_install_param[0].get(
'helmchartfile_path')
chartfile_name = chartfile_path[
chartfile_path.rfind(os.sep) + 1:]
chart_name = "-".join(chartfile_name.split("-")[:-1])
upgrade_chart_name = ("/var/tacker/helm/"
f"{vnf_instance.id}/{chart_name}")
vdu_properties = vdu_def.get('properties')
for helm_install_param in helm_install_params:
if self._is_exthelmchart(helm_install_param):
chart_name = helm_install_param.get('helmchartname')
upgrade_chart_name = "/".join(
[helm_install_param.get('helmrepositoryname'),
chart_name])
else:
chartfile_path = helm_install_param.get(
'helmchartfile_path')
chartfile_name = chartfile_path[
chartfile_path.rfind(os.sep) + 1:]
chart_name = "-".join(chartfile_name.split("-")[:-1])
upgrade_chart_name = ("/var/tacker/helm/"
f"{vnf_instance.id}/{chart_name}")
release_name = helm_install_param.get('helmreleasename')
resource_name = "-".join([release_name, chart_name])
if resource_name == vdu_properties.get('name'):
found_flag = True
break
if found_flag:
break
break
# Prepare for scale operation
helm_replica_values = inst_additional_params.get('helm_replica_values')
@@ -1256,6 +1260,54 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
return
def _get_scale_target_info(self, aspect_id, vdu_defs, vnf_resources,
vdu_mapping):
"""get information of scale target."""
if not vdu_mapping:
is_found = False
target_kinds = ["Deployment", "ReplicaSet", "StatefulSet"]
for vnf_resource in vnf_resources:
# The resource that matches the following is the
# resource to be scaled:
# The `name` of the resource stored in vnf_resource
# (the name defined in `metadata.name` of Kubernetes
# object file) matches the value of `properties.name`
# of VDU defined in VNFD.
# Information is stored in vnfc_resource as follows:
# - resource_name : "name"
# - resource_type : "api_version,kind"
name = vnf_resource.resource_name
for vdu_id, vdu_def in vdu_defs.items():
vdu_properties = vdu_def.get('properties')
if name == vdu_properties.get('name'):
_, kind = (vnf_resource.resource_type
.split(COMMA_CHARACTER))
if kind in target_kinds:
is_found = True
break
if is_found:
break
else:
error_reason = (
"Target VnfResource for aspectId"
f" {aspect_id} is not found in DB")
raise vnfm.CNFScaleFailed(reason=error_reason)
else:
# Get parameters from vdu_mapping with using vdu_id as key.
for vdu_id, vdu_def in vdu_defs.items():
vdu_map_value = vdu_mapping.get(vdu_id, {})
kind = vdu_map_value.get('kind')
name = vdu_map_value.get('name')
if kind and name:
vdu_properties = vdu_def.get('properties')
break
else:
error_reason = (
"Target vdu information for aspectId"
f" {aspect_id} is not found in vdu_mapping")
raise vnfm.CNFScaleFailed(reason=error_reason)
return kind, name, vdu_id, vdu_properties
@log.log
def scale(self, context, plugin, auth_attr, policy, region_name):
"""Scale function
@@ -1275,7 +1327,8 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
context, policy['vnf_instance_id'])
# check use_helm flag
inst_vnf_info = vnf_instance.instantiated_vnf_info
if self._is_use_helm_flag(inst_vnf_info.additional_params):
additional_params = inst_vnf_info.additional_params
if self._is_use_helm_flag(additional_params):
self._helm_scale(context, vnf_instance, policy)
return
namespace = vnf_instance.vnf_metadata['namespace']
@@ -1285,33 +1338,9 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
auth=auth_cred)
aspect_id = policy['name']
vdu_defs = policy['vdu_defs']
is_found = False
error_reason = None
target_kinds = ["Deployment", "ReplicaSet", "StatefulSet"]
for vnf_resource in vnf_resources:
# The resource that matches the following is the resource
# to be scaled:
# The `name` of the resource stored in vnf_resource (the
# name defined in `metadata.name` of Kubernetes object
# file) matches the value of `properties.name` of VDU
# defined in VNFD.
name = vnf_resource.resource_name
for vdu_id, vdu_def in vdu_defs.items():
vdu_properties = vdu_def.get('properties')
if name == vdu_properties.get('name'):
kind = vnf_resource.resource_type.\
split(COMMA_CHARACTER)[1]
if kind in target_kinds:
is_found = True
break
if is_found:
break
else:
error_reason = _(
"Target VnfResource for aspectId"
" {aspect_id} is not found in DB").format(
aspect_id=aspect_id)
raise vnfm.CNFScaleFailed(reason=error_reason)
vdu_mapping = additional_params.get('vdu_mapping')
kind, name, _, vdu_properties = self._get_scale_target_info(
aspect_id, vdu_defs, vnf_resources, vdu_mapping)
scale_info = self._call_read_scale_api(
app_v1_api_client=app_v1_api_client,
@@ -1330,7 +1359,7 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
min_replicas = vdu_profile.get('min_number_of_instances')
if (scale_replicas < min_replicas) or \
(scale_replicas > max_replicas):
error_reason = _(
error_reason = (
"The number of target replicas after"
" scaling [{after_replicas}] is out of range").\
format(
@@ -1446,27 +1475,11 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
auth=auth_cred)
aspect_id = policy['name']
vdu_defs = policy['vdu_defs']
is_found = False
error_reason = None
target_kinds = ["Deployment", "ReplicaSet", "StatefulSet"]
for vnf_resource in vnf_resources:
name = vnf_resource.resource_name
for vdu_id, vdu_def in vdu_defs.items():
vdu_properties = vdu_def.get('properties')
if name == vdu_properties.get('name'):
kind = vnf_resource.resource_type.\
split(COMMA_CHARACTER)[1]
if kind in target_kinds:
is_found = True
break
if is_found:
break
else:
error_reason = _(
"Target VnfResource for aspectId {aspect_id}"
" is not found in DB").format(
aspect_id=aspect_id)
raise vnfm.CNFScaleWaitFailed(reason=error_reason)
inst_vnf_info = vnf_instance.instantiated_vnf_info
additional_params = inst_vnf_info.additional_params
vdu_mapping = additional_params.get('vdu_mapping')
kind, name, _, _ = self._get_scale_target_info(
aspect_id, vdu_defs, vnf_resources, vdu_mapping)
scale_info = self._call_read_scale_api(
app_v1_api_client=app_v1_api_client,
@@ -1495,13 +1508,13 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
stack_retries = stack_retries - 1
time.sleep(self.STACK_RETRY_WAIT)
elif status == 'Unknown':
error_reason = _(
error_reason = (
"CNF Scale failed caused by the Pod status"
" is Unknown")
raise vnfm.CNFScaleWaitFailed(reason=error_reason)
if stack_retries == 0 and status != 'Running':
error_reason = _(
error_reason = (
"CNF Scale failed to complete within"
" {wait} seconds while waiting for the aspect_id"
" {aspect_id} to be scaled").format(
@@ -1564,6 +1577,31 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
return exthelmchart.lower() == 'true'
return bool(exthelmchart)
def _validate_vdu_id_in_vdu_mapping(self, vdu_mapping, vnfd):
"""validate vdu_id between vdu_mapping and VDU defined in VNFD."""
nodes = vnfd.get("topology_template", {}).get("node_templates", {})
vdu_ids = {name for name, data in nodes.items()
if data['type'] == 'tosca.nodes.nfv.Vdu.Compute'}
unfound_vdu_ids = {vdu_id for vdu_id in vdu_ids
if vdu_id not in vdu_mapping.keys()}
if unfound_vdu_ids:
error_reason = ("Parameter input values missing "
f"'vdu_id={str(unfound_vdu_ids)}' in vdu_mapping")
LOG.error(error_reason)
raise exceptions.InvalidInput(error_reason)
def _validate_k8s_rsc_in_vdu_mapping(self, vdu_mapping, kind, name):
"""validate k8s resource kind/name between vdu_mapping and manifest."""
if kind in ('Pod', 'Deployment', 'DaemonSet', 'StatefulSet',
'ReplicaSet'):
found_rsc = {values['name'] for values in vdu_mapping.values()
if values['kind'] == kind and values['name'] == name}
if not found_rsc:
error_reason = ("Parameter input values missing resource info "
f"'{kind}:{name}' in vdu_mapping")
LOG.error(error_reason)
raise exceptions.InvalidInput(error_reason)
def _pre_helm_install(self, context, vnf_instance, vim_connection_info,
instantiate_vnf_req, vnf_package_path):
def _check_param_exists(params_dict, check_param):
@@ -1627,6 +1665,22 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
raise exceptions.InvalidInput(
f"Replica value for aspectId '{aspect_id}' is missing")
# check vdu_mapping parameter
_check_param_exists(additional_params, 'vdu_mapping')
vdu_mapping = additional_params.get('vdu_mapping', {})
# check vdu_id between vdu_mapping and VNFD
self._validate_vdu_id_in_vdu_mapping(vdu_mapping, vnfd)
# check helmreleasename in vdu_mapping and using_helm_install_param
helmreleasenames = {values.get('helmreleasename')
for _, values in vdu_mapping.items()}
for helm_install_param in helm_install_param_list:
helmreleasename = helm_install_param.get('helmreleasename')
if helmreleasename not in helmreleasenames:
error_reason = ("Parameter input values missing "
f"'helmreleasename={helmreleasename}' in vdu_mapping")
LOG.error(error_reason)
raise exceptions.InvalidInput(error_reason)
def _get_target_k8s_files(self, instantiate_vnf_req):
if instantiate_vnf_req.additional_params and\
CNF_TARGET_FILES_KEY in\
@@ -1681,9 +1735,9 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
# and we will push the request to existed code
return vnf_resources
else:
vnfd = vnfd_obj.VnfPackageVnfd.get_by_id(
package_vnfd = vnfd_obj.VnfPackageVnfd.get_by_id(
context, vnf_instance.vnfd_id)
package_uuid = vnfd.package_uuid
package_uuid = package_vnfd.package_uuid
vnf_package = vnf_package_obj.VnfPackage.get_by_id(
context, package_uuid, expected_attrs=['vnf_artifacts'])
if vnf_package.vnf_artifacts:
@@ -1737,6 +1791,23 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
vnf_resources[target_k8s_index] = vnf_resources_temp
# check vdu_mapping parameter if exist
vdu_mapping = (instantiate_vnf_req.additional_params
.get('vdu_mapping'))
if vdu_mapping:
vnfd = vnflcm_utils.get_vnfd_dict(context,
vnf_instance.vnfd_id, instantiate_vnf_req.flavour_id)
# check vdu_id between vdu_mapping and VNFD
self._validate_vdu_id_in_vdu_mapping(vdu_mapping, vnfd)
for resources in vnf_resources.values():
for vnf_resource in resources:
name = vnf_resource.resource_name
_, kind = vnf_resource.resource_type.split(
COMMA_CHARACTER)
# check kind and name between vdu_mapping and manifest
self._validate_k8s_rsc_in_vdu_mapping(
vdu_mapping, kind, name)
LOG.debug(f"all manifest namespace and kind: {chk_namespaces}")
k8s_utils.check_and_save_namespace(
instantiate_vnf_req, chk_namespaces, vnf_instance)
@@ -1897,7 +1968,8 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
# initialize Transformer
transformer = translate_outputs.Transformer(
None, None, None, None)
if self._is_use_helm_flag(instantiate_vnf_req.additional_params):
additional_params = instantiate_vnf_req.additional_params
if self._is_use_helm_flag(additional_params):
k8s_objs = self._post_helm_install(context,
vim_connection_info, instantiate_vnf_req, transformer,
namespace)
@@ -1905,21 +1977,28 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
# get Kubernetes object
k8s_objs = transformer.get_k8s_objs_from_yaml(
target_k8s_files, vnf_package_path, namespace)
# get TOSCA node templates
vnfd_dict = vnflcm_utils._get_vnfd_dict(
context, vnf_instance.vnfd_id,
vnf_instance.instantiated_vnf_info.flavour_id)
tosca = tosca_template.ToscaTemplate(
parsed_params={}, a_file=False, yaml_dict_tpl=vnfd_dict)
tosca_node_tpls = tosca.topology_template.nodetemplates
# get vdu_ids dict {vdu_name(as pod_name): vdu_id}
vdu_ids = {}
for node_tpl in tosca_node_tpls:
for node_name, node_value in node_tpl.templates.items():
if node_value.get('type') == "tosca.nodes.nfv.Vdu.Compute":
vdu_id = node_name
vdu_name = node_value.get('properties').get('name')
vdu_ids[vdu_name] = vdu_id
vdu_mapping = additional_params.get('vdu_mapping')
if not vdu_mapping:
# get TOSCA node templates
vnfd_dict = vnflcm_utils._get_vnfd_dict(
context, vnf_instance.vnfd_id,
vnf_instance.instantiated_vnf_info.flavour_id)
tosca = tosca_template.ToscaTemplate(
parsed_params={}, a_file=False, yaml_dict_tpl=vnfd_dict)
tosca_node_tpls = tosca.topology_template.nodetemplates
# get vdu_ids dict {vdu_name: vdu_id} from VNFD
vdu_ids = {}
for node_tpl in tosca_node_tpls:
for node_name, node_value in node_tpl.templates.items():
if (node_value.get('type') ==
"tosca.nodes.nfv.Vdu.Compute"):
vdu_id = node_name
vdu_name = node_value.get('properties').get('name')
vdu_ids[vdu_name] = vdu_id
else:
# get vdu_ids dict {vdu_name: vdu_id} from vdu_mapping
vdu_ids = {values.get('name'): vdu_id
for vdu_id, values in vdu_mapping.items()}
# initialize Kubernetes APIs
core_v1_api_client = self.kubernetes.get_core_v1_api_client(
auth=auth_cred)
@@ -2433,29 +2512,17 @@ class Kubernetes(abstract_driver.VnfAbstractDriver,
a_file=False,
yaml_dict_tpl=vnfd_dict)
extract_policy_infos = vnflcm_utils.get_extract_policy_infos(tosca)
aspect_id = scale_vnf_request.aspect_id
vdu_defs = vnflcm_utils.get_target_vdu_def_dict(
extract_policy_infos=extract_policy_infos,
aspect_id=scale_vnf_request.aspect_id,
aspect_id=aspect_id,
tosca=tosca)
namespace = vnf_instance.vnf_metadata['namespace']
is_found = False
target_kinds = ["Deployment", "ReplicaSet", "StatefulSet"]
for vnf_resource in vnf_resources:
# For CNF operations, Kubernetes resource information is
# stored in vnfc_resource as follows:
# - resource_name : "name"
# - resource_type : "api_version,kind"
rsc_name = vnf_resource.resource_name
for vdu_id, vdu_def in vdu_defs.items():
vdu_properties = vdu_def.get('properties')
if rsc_name == vdu_properties.get('name'):
rsc_kind = vnf_resource.resource_type.split(',')[1]
target_vdu_id = vdu_id
if rsc_kind in target_kinds:
is_found = True
break
if is_found:
break
inst_vnf_info = vnf_instance.instantiated_vnf_info
vdu_mapping = (inst_vnf_info.additional_params
.get('vdu_mapping'))
rsc_kind, rsc_name, target_vdu_id, _ = self._get_scale_target_info(
aspect_id, vdu_defs, vnf_resources, vdu_mapping)
# extract stored Pod names by vdu_id
stored_pod_list = []
metadata = None