According to the SOL003 specification, when a duplicate subscription is
created, the API should return 303 See Other instead of 201 Created.
To fix this, the following changes are made in this patch:
1. Fix the error that occurs when the database virtual field extracts
the value from JSON in the `vnf_lcm_filters` table.
2. Fix the erroneous SQL query statement used when querying subscriptions.
3. The attribute VnfProductsFromProviders should be an array, not an
object. Fix this error in the UT, FT, and schemas.
4. Fix typos where "object" was written as "objects" in the "_versions"
dict in the vnf_lcm schemas.
5. Rewrite the throwing and catching of the SeeOther exception in this
API to fix the following errors: SeeOther should be raised instead of
a generic Exception; the response body should be empty instead of
ProblemDetails; and the response headers should contain a Location field.
The "test_instantiate_terminate_vnf_with_vnflcmnoop" method of the
multinode-sol job occasionally fails.
After VNF termination, "instantiationState" becomes "NOT_INSTANTIATED".
However, "vnflcmOpOccs.operation_state" is not yet "COMPLETED" at this
point, and the test sleeps for 20 seconds before asserting operation_state.
If stack deletion does not complete within the 20-second sleep, reading
"PROCESSING" when asserting operation_state results in a misjudgment and
the test fails.
This patch resolves the error by waiting until "COMPLETED" instead of
sleeping for a fixed time.
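The waiting approach can be sketched as follows (a generic polling helper, assuming an injected `get_state` callable; not the actual test code):

```python
import time

def wait_lcm_operation_state(get_state, expected="COMPLETED",
                             timeout=300, interval=5):
    # Poll vnflcmOpOccs.operation_state until it reaches the expected
    # value, instead of a fixed 20-second sleep followed by a single
    # assert (which misjudges a still-"PROCESSING" deletion).
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = get_state()
        if state == expected:
            return state
        time.sleep(interval)
    raise AssertionError("operation_state did not reach %s" % expected)
```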
We've been using the old cirros-0.4.0 image for a long time, and there
is no need to pin the cirros version to 0.4.0. This patch updates it to
the latest release.
Most of the changes in this patch replace `image`, `hash` and
`version` in the TOSCA definition files. The rest of them are:
* Replace images. The first one is the only actual image; the rest are
empty text files, a trick for reducing the total size of the tacker
repo. So, the text files are not replaced, but simply renamed.
* Update the functional tests verifying images. A refactor of how the
package size is defined is included.
* Update the devstack script and the helper scripts under the `contrib` dir.
* Update API references and documents describing usage of the image.
Signed-off-by: Yasufumi Ogawa <firstname.lastname@example.org>
Implements: blueprint update-cirros-052
The current ChangeExternalConnectivity operation only supports the
user data case; this patch improves the operation to support
additional cases.
Implements: blueprint support-change-external-connectivity
This patch fixes the issue wherein, if the VM name and the
AutoScalingGroup name are identical, both are marked unhealthy and
both become targets for healing.
The fix is in the Instantiation part, since the root cause of the
issue is that Nova::Server is not included as a resource.
I have confirmed that the FT results are incorrect.
This is not a problem with the code, but with the test parameters.
This patch improves the test code so that the correct results are
obtained.
* This patch is not a bug fix but an improvement.
- Fixed the mismatch between the VNFD and HOT resource information.
- Fixed the parameter of the external network resource information.
Grant handling for third-party NFVOs is improved: processing related to
placement constraints is now supported. This enables interoperability
with third-party NFVOs, allowing them to deploy each VM in the VNF after
setting the zone information based on the grant response from the NFVO.
Co-authored-By: Yasufumi Ogawa <email@example.com>
Implements: blueprint support-placement-constraints
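The zone handling above can be sketched with a hypothetical helper (the function name is illustrative; the dictionary keys follow the SOL003 Grant structure, where "zones" holds ZoneInfo entries and "addResources" holds GrantInfo entries referencing them via zoneId):

```python
def zone_by_resource(grant):
    # Map each granted resource to the availability zone chosen by the
    # NFVO, so each VM can be deployed in the zone named in the grant.
    zones = {z["id"]: z["zoneId"] for z in grant.get("zones", [])}
    return {r["resourceDefinitionId"]: zones.get(r.get("zoneId"))
            for r in grant.get("addResources", [])}
```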
This feature will enable the client to use this API to perform the
following:
- allow the client to change the external connectivity of a VNF
- the VNFM supports changing the port/network
- the VNFM also supports changing the IP address/MAC address
The operations provided through additional attributes are listed below:
- VNF instances (GET)
- Individual VNF Instances (GET)
- Notification Endpoint (POST)
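A hypothetical example of a Change External VNF Connectivity request body (SOL003 ChangeExtVnfConnectivityRequest); all identifiers and addresses below are placeholders, not values from this patch:

```python
change_ext_conn_request = {
    "extVirtualLinks": [{
        "id": "ext-vl-1",
        "resourceId": "new-network-uuid",  # switch the CP to another network
        "extCps": [{
            "cpdId": "CP1",
            "cpConfig": [{
                "cpProtocolData": [{
                    "layerProtocol": "IP_OVER_ETHERNET",
                    "ipOverEthernet": {
                        "macAddress": "fa:16:3e:00:00:01",
                        "ipAddresses": [{
                            "type": "IPV4",
                            "fixedAddresses": ["10.10.0.5"],
                        }],
                    },
                }],
            }],
        }],
    }],
}
```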
Modified .zuul.yaml to disable image_volume_cache, since the cache
volume remains during Terminate and leads to a Heat stack deletion
failure without the volume type being erased.
Implements: blueprint support-change-external-connectivity
This functionality will enable tacker to test the notification
endpoint during Subscription registration.
If the VNFM cannot receive a success response
(204 No Content), the VNFM fails the Subscription.
- For send management of this API, we apply retry parameters
(retry_num, retry_wait), which are defined in the configuration.
- For send control of this API, we add a new process.
- We modify (add) the Notification (GET) process accordingly.
Implements: blueprint support-fundamental-lcm
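The endpoint check above can be sketched like this (`send_get`, `retry_num` and `retry_wait` are illustrative names, assuming an injected callable that issues the test GET and returns the HTTP status code):

```python
import time

def check_notification_endpoint(send_get, retry_num=3, retry_wait=0.5):
    # Issue a test GET to the callback URI; unless 204 No Content is
    # returned within retry_num attempts, the subscription is failed.
    for _ in range(retry_num):
        if send_get() == 204:
            return True
        time.sleep(retry_wait)
    return False
```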
In this implementation, tacker supports only the "fixedAddresses" parameter.
This patch supports the "numDynamicAddresses" parameter in
IpOverEthernetAddress and modifies the API to accept it in requests.
* In this patch, support for addressRange has been postponed.
The parameter is defined in the ipAddress parameter by
ETSI SOL003 -> v2.6.1 -> 220.127.116.11c.
In addition, the LCM user data route is modified so that it can use
this parameter.
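Hypothetical IpOverEthernetAddress fragments showing the two ways an address can now be requested (the subnet IDs are placeholders): a fixed address, or a count of dynamically assigned addresses via numDynamicAddresses.

```python
fixed_entry = {"type": "IPV4",
               "fixedAddresses": ["10.0.0.5"],
               "subnetId": "subnet-a"}
dynamic_entry = {"type": "IPV4",
                 "numDynamicAddresses": 1,
                 "subnetId": "subnet-b"}

def is_dynamic(ip_entry):
    # An entry uses dynamic assignment when it carries
    # numDynamicAddresses instead of fixedAddresses.
    return "numDynamicAddresses" in ip_entry
```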
This feature will enable the client to use an API
to mark a VNF Lifecycle Management Operation
occurrence as "finally failed".
It is based on the ETSI NFV specification, where
"FAILED" means "finally failed". Once the operation
is marked as "finally failed", it can no longer be
retried or rolled back.
- As defined in SOL003, the operation state can
change from FAILED_TEMP to FAILED.
- When the VNFM receives this API call, the VNFM sends a
Notification (POST) to the requested node.
- Support for the Fail response parameter will be
provided in a separate patch.
Implements: blueprint support-error-handling
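A minimal sketch of the Fail operation's state rule (the function name is illustrative): only an occurrence in FAILED_TEMP may be marked finally failed, and once FAILED it can no longer be retried or rolled back.

```python
def mark_finally_failed(operation_state):
    # Per SOL003, the only allowed transition for the Fail operation
    # is FAILED_TEMP -> FAILED.
    if operation_state != "FAILED_TEMP":
        raise ValueError(
            "fail is only allowed from FAILED_TEMP, not %s" % operation_state)
    return "FAILED"
```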
* Enable VNF vendors to customize configuration methods
for applications via MgmtDriver
* Load the LCM interface from the VNFD file
* Call LCM methods from vnflcm_driver
* Fix unintentional mgmt_calls in vnflcm_driver._scale_vnf_pre
* For UT, sqlalchemy is used, which calls functions in pymysql.py,
so the dependency module PyMySQL needs to be added.
* The previous implementation of ``cfg.CONF.tacker.infra_driver``
depends on the default config in the legacy ``VNFMPlugin`` class in
tacker/vnfm/plugin.py, so we need to fix this here.
* Dropping the unintentional mgmt_calls breaks compatibility for some
scale action calls and third-party driver support. We decided to keep
this backward incompatibility since not much time has passed since
the last change (Victoria release).
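The vendor-customization hook can be pictured as follows (the class and method names are examples, not the exact Tacker interface): vnflcm_driver loads the driver named in the VNFD and calls its hooks around LCM operations.

```python
import abc

class AbstractMgmtDriver(metaclass=abc.ABCMeta):
    # Illustrative subset of an MgmtDriver interface.
    @abc.abstractmethod
    def instantiate_end(self, context, vnf_instance, **kwargs):
        """Configure the application after instantiation."""

class VendorMgmtDriver(AbstractMgmtDriver):
    def instantiate_end(self, context, vnf_instance, **kwargs):
        # Vendor-specific configuration would run here.
        return "configured %s" % vnf_instance
```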
fails because of a timeout during heat stack creation.
This patch increases the timeout value for heat stack creation.
This patch adds a kubernetes cluster for the kubernetes-related
functional tests of the VNF LCM in the zuul environment.
There is no impact on the existing jobs because this patch only
adds a new job; however, we may need to watch the load on the Zuul
environment due to its parallel jobs.
The new node-set consists of four nodes:
* Controller: Keystone, Nova, Neutron, Glance, Cinder, Octavia,
MySQL, MQ, ETCD
* Controller-tacker: Tacker, Tacker-conductor
* Controller-k8s: kuryr-k8s, kuryr-CNI, k8s-api, kubelet
* Compute: Nova-compute
All kubernetes resources are created on the controller-k8s node.
This patch includes the following changes:
* Added an execution command for the k8s-related functional tests
of the VNF LCM in tox.ini.
* Registered a vim of the `kubernetes` type via ansible, and
added related materials.
* Moved the functional test files for k8s to a new directory.
* Fixed a minor invalid definition in the definition file used
for functional testing.
This patch is a provisional workaround for bug .
A possible way to fix the bug is to use the kubernetes
python client v11.0.0. However, as another issue, the
ControllerRevision resources are not deleted and remain. The
problem of the resources remaining cannot be solved immediately.
Therefore, a skip decorator is added to skip the functional test of
ControllerRevision as a workaround.
The added skip decorator will be removed after this issue is resolved.
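The workaround pattern looks like this (the test and class names are illustrative, not the actual FT code): skip the ControllerRevision test with a note explaining why, so the decorator is easy to find and remove later.

```python
import unittest

class ControllerRevisionTest(unittest.TestCase):
    @unittest.skip("workaround: ControllerRevision resources are not "
                   "deleted and remain; remove once the issue is resolved")
    def test_controller_revision(self):
        self.fail("must not run while the workaround is in place")
```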
FT execution time in Zuul is about to exceed its timeout. This patch
makes the existing dsvm-functional FT job separate into two jobs, one
for legacy and one for SOL.
This patch also fixes the unstable FT results in the zuul FT jobs:
``depends_on`` in BaseHOT makes stack deletion stable; otherwise
Heat fails to delete subnets due to a missing dependency with ports.
"--concurrency 1" enables the tests to run without any VMs on the hosts;
otherwise Blazar fails to add hosts into the freepool.
* Split the FT directory into legacy and sol
* Create new tox envs for the legacy FT and the SOL FT
* Relative paths are changed due to the new directory structure
* Add the ``depends_on`` parameter in BaseHOT
* Set "--concurrency 1" for both of the new FT jobs