Add reservation support

Added reservation policy support. With the reservation policy, a user can
create a VNF using resources reserved through Blazar.

Depends-On: I2b989a49ac3447995a82ddb7193bf478bb847b73
Implements: blueprint reservation-vnfm
Change-Id: Ia6a87894ba219c045140e8e65e03f87509bbdb6d

parent bea4922a53
commit 2595cc112f
@ -38,6 +38,8 @@
       timeout: 9000
       required-projects:
         - openstack/aodh
+        - openstack/blazar
+        - openstack/blazar-nova
         - openstack/horizon
         - openstack/barbican
         - openstack/ceilometer
@ -46,6 +48,7 @@
         - openstack/mistral-dashboard
         - openstack/networking-sfc
         - openstack/python-barbicanclient
+        - openstack/python-blazarclient
         - openstack/python-mistralclient
         - openstack/python-tackerclient
         - openstack/tacker
@ -74,6 +77,7 @@
       barbican: https://git.openstack.org/openstack/barbican
       mistral: https://git.openstack.org/openstack/mistral
       tacker: https://git.openstack.org/openstack/tacker
+      blazar: https://git.openstack.org/openstack/blazar
       devstack_services:
         horizon: false
         swift: false

@ -99,6 +99,9 @@ The **local.conf** file of all-in-one mode from [#f2]_ is shown as below:
     enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer master
     enable_plugin aodh https://git.openstack.org/openstack/aodh master

+    # Blazar
+    enable_plugin blazar https://github.com/openstack/blazar.git master
+
     # Tacker
     enable_plugin tacker https://git.openstack.org/openstack/tacker master

@ -23,3 +23,4 @@ Reference

    mistral_workflows_usage_guide.rst
    block_storage_usage_guide.rst
+   reservation_policy_usage_guide.rst

513 doc/source/reference/reservation_policy_usage_guide.rst (new file)
@ -0,0 +1,513 @@
..
  Copyright 2018 NTT DATA

  Licensed under the Apache License, Version 2.0 (the "License"); you may
  not use this file except in compliance with the License. You may obtain
  a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
  WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
  License for the specific language governing permissions and limitations
  under the License.

===================================
VNF scaling with reserved resources
===================================

Tacker allows you to configure reserved compute resources in a reservation
policy. The compute resources must first be reserved in the OpenStack
``Blazar`` service by creating leases, which can then be configured in the
VNFD template.

TOSCA schema for reservation policy
-----------------------------------

Tacker defines the TOSCA schema for the reservation policy as given below:

.. code-block:: yaml

  tosca.policies.tacker.Reservation:
    derived_from: tosca.policies.Reservation
    reservation:
      start_actions:
        type: list
        entry_schema:
          type: string
        required: true
      before_end_actions:
        type: list
        entry_schema:
          type: string
        required: true
      end_actions:
        type: list
        entry_schema:
          type: string
        required: true
      properties:
        lease_id:
          type: string
          required: true

The following TOSCA snippet shows a VNFD template using the reservation
policy. The policy defines three different types of actions:

#. start_actions

#. before_end_actions

#. end_actions

Each of these can contain multiple actions, but a scaling policy is
mandatory in start_actions and in at least one of before_end_actions or
end_actions. The scaling policy configured in start_actions acts as the
scaling-out policy, so configure its max_instances according to the compute
resources reserved in the Blazar service; the scaling policy configured in
before_end_actions or end_actions acts as the scaling-in policy, so
configure its min_instances to 0. Also, `default_instances` should be set
to 0 because no VDUs are wanted until Tacker receives the lease-start
trigger from Blazar through the Aodh service. The parameter `increment`
should also be set equal to `max_instances`, as Tacker receives the
lease-start trigger only once during the lifecycle of a lease.

.. code-block:: yaml

  policies:

    - RSV:
        type: tosca.policies.tacker.Reservation
        reservation:
          start_actions: [SP_RSV, log]
          before_end_actions: [SP_RSV]
          end_actions: [noop]
          properties:
            lease_id: { get_input: lease_id }
    - SP_RSV:
        type: tosca.policies.tacker.Scaling
        properties:
          increment: 2
          cooldown: 120
          min_instances: 0
          max_instances: 2
          default_instances: 0
          targets: [VDU1]

Installation and configurations
-------------------------------

1. You need the Blazar, Ceilometer and Aodh OpenStack services.

2. Modify the below configuration files:

/etc/blazar/blazar.conf:

.. code-block:: ini

  [oslo_messaging_notifications]
  driver = messaging, log

/etc/ceilometer/event_pipeline.yaml:

.. code-block:: yaml

  sinks:
      - name: event_sink
        transformers:
        publishers:
            - gnocchi://?archive_policy=low&filter_project=gnocchi_swift
            - notifier://
            - notifier://?topic=alarm.all

/etc/ceilometer/event_definitions.yaml:

.. code-block:: yaml

  - event_type: lease.event.start_lease
    traits: &lease_traits
      lease_id:
        fields: payload.lease_id
      project_id:
        fields: payload.project_id
      user_id:
        fields: payload.user_id
      start_date:
        fields: payload.start_date
      end_date:
        fields: payload.end_date
  - event_type: lease.event.before_end_lease
    traits: *lease_traits
  - event_type: lease.event.end_lease
    traits: *lease_traits
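
Once the Ceilometer, Aodh and Blazar services have been restarted with the
configuration above, each VNF deployed with a reservation policy (see the
next section) gets one Aodh event alarm per lease event. The following is a
sketch of a quick sanity check, not part of the original change; the output
is omitted:

.. code-block:: console

  $ aodh alarm list --filter type=event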

Deploying reservation tosca template with tacker
------------------------------------------------

When reservation resource type is virtual:instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. Create a lease in Blazar for an instance reservation:

.. sourcecode:: console

  $ blazar lease-create --reservation resource_type=virtual:instance,vcpus=1,memory_mb=1024,disk_gb=20,amount=0,affinity=False \
    --start-date "2019-04-24 20:00" --end-date "2019-07-09 21:00" lease-1

  +--------------+-----------------------------------------------------------------+
  | Field        | Value                                                           |
  +--------------+-----------------------------------------------------------------+
  | created_at   | 2018-12-10 07:44:46                                             |
  | degraded     | False                                                           |
  | end_date     | 2019-07-09T21:00:00.000000                                      |
  | events       | {                                                               |
  |              |     "status": "UNDONE",                                         |
  |              |     "lease_id": "aca14613-2bed-480e-aefe-97fa02813fcf",         |
  |              |     "event_type": "start_lease",                                |
  |              |     "created_at": "2018-12-10 07:44:49",                        |
  |              |     "updated_at": null,                                         |
  |              |     "time": "2019-04-24T20:00:00.000000",                       |
  |              |     "id": "038c882a-1c9e-4785-aab0-07a6898653cf"                |
  |              | }                                                               |
  |              | {                                                               |
  |              |     "status": "UNDONE",                                         |
  |              |     "lease_id": "aca14613-2bed-480e-aefe-97fa02813fcf",         |
  |              |     "event_type": "before_end_lease",                           |
  |              |     "created_at": "2018-12-10 07:44:49",                        |
  |              |     "updated_at": null,                                         |
  |              |     "time": "2019-07-09T20:00:00.000000",                       |
  |              |     "id": "607fb807-55e1-44ff-927e-64a4ec71b0f1"                |
  |              | }                                                               |
  |              | {                                                               |
  |              |     "status": "UNDONE",                                         |
  |              |     "lease_id": "aca14613-2bed-480e-aefe-97fa02813fcf",         |
  |              |     "event_type": "end_lease",                                  |
  |              |     "created_at": "2018-12-10 07:44:49",                        |
  |              |     "updated_at": null,                                         |
  |              |     "time": "2019-07-09T21:00:00.000000",                       |
  |              |     "id": "fd6b1f91-bfc8-49d8-94a7-5136ee2fdaee"                |
  |              | }                                                               |
  | id           | aca14613-2bed-480e-aefe-97fa02813fcf                            |
  | name         | lease-1                                                         |
  | project_id   | 683322bea7154651b18792b59df67d4e                                |
  | reservations | {                                                               |
  |              |     "status": "pending",                                        |
  |              |     "memory_mb": 1024,                                          |
  |              |     "lease_id": "aca14613-2bed-480e-aefe-97fa02813fcf",         |
  |              |     "resource_properties": "",                                  |
  |              |     "disk_gb": 10,                                              |
  |              |     "resource_id": "bb335cc1-770d-4251-90d8-8f9ea95dac56",      |
  |              |     "created_at": "2018-12-10 07:44:46",                        |
  |              |     "updated_at": "2018-12-10 07:44:49",                        |
  |              |     "missing_resources": false,                                 |
  |              |     "server_group_id": "589b014e-2a68-48b1-87ee-4e9054560206",  |
  |              |     "amount": 1,                                                |
  |              |     "affinity": false,                                          |
  |              |     "flavor_id": "edcc0e22-1f7f-4d57-abe4-aeb0775cbd36",        |
  |              |     "id": "edcc0e22-1f7f-4d57-abe4-aeb0775cbd36",               |
  |              |     "aggregate_id": 6,                                          |
  |              |     "vcpus": 1,                                                 |
  |              |     "resource_type": "virtual:instance",                        |
  |              |     "resources_changed": false                                  |
  |              | }                                                               |
  | start_date   | 2019-04-24T20:00:00.000000                                      |
  | status       | PENDING                                                         |
  | trust_id     | 080f059dabbb4cb0a6398743abcc3224                                |
  | updated_at   | 2018-12-10 07:44:49                                             |
  | user_id      | c42317bee82940509427c63410fd058a                                |
  +--------------+-----------------------------------------------------------------+

..

2. In the parameter file provided for reservation, replace the flavor,
   lease_id and server_group_id values with the flavor, lease_id and
   server_group_id values from the lease response.
   Ref:
   ``samples/tosca-templates/vnfd/tosca-vnfd-instance-reservation-param-values.yaml``

.. note::
   The `server_group_id` parameter should be specified in the VDU section
   only when the reservation resource type is `virtual:instance`. Operators
   shouldn't configure both a placement policy under policies and a
   server_group_id in the VDU of a VNFD template; otherwise the
   server_group_id specified in the VDU will be superseded by the server
   group that Heat creates for the placement policy.

.. code-block:: yaml

  {

  flavor: 'edcc0e22-1f7f-4d57-abe4-aeb0775cbd36',
  lease_id: 'aca14613-2bed-480e-aefe-97fa02813fcf',
  resource_type: 'virtual_instance',
  server_group_id: '8b01bdf8-a47c-49ea-96f1-3504fccfc9d4',

  }

``Sample tosca-template``:

.. sourcecode:: yaml

  tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

  description: VNF TOSCA template with flavor input parameters

  metadata:
    template_name: sample-tosca-vnfd-instance-reservation

  topology_template:
    inputs:
      flavor:
        type: string
        description: Flavor Information

      lease_id:
        type: string
        description: lease id

      resource_type:
        type: string
        description: reservation resource type

      server_group_id:
        type: string
        description: server group id

    node_templates:
      VDU1:
        type: tosca.nodes.nfv.VDU.Tacker
        properties:
          image: cirros-0.4.0-x86_64-disk
          flavor: { get_input: flavor }
          reservation_metadata:
            resource_type: { get_input: resource_type }
            id: { get_input: server_group_id }

      CP1:
        type: tosca.nodes.nfv.CP.Tacker
        properties:
          management: true
          order: 0
          anti_spoofing_protection: false
        requirements:
          - virtualLink:
              node: VL1
          - virtualBinding:
              node: VDU1

      VL1:
        type: tosca.nodes.nfv.VL
        properties:
          network_name: net_mgmt
          vendor: Tacker

    policies:
      - RSV:
          type: tosca.policies.tacker.Reservation
          reservation:
            start_actions: [SP_RSV]
            before_end_actions: [SP_RSV]
            end_actions: [noop]
            properties:
              lease_id: { get_input: lease_id }
      - SP_RSV:
          type: tosca.policies.tacker.Scaling
          properties:
            increment: 2
            cooldown: 120
            min_instances: 0
            max_instances: 2
            default_instances: 0
            targets: [VDU1]

..
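
With the parameter file updated, the VNFD and VNF can be created in the
usual way. The following is a sketch using the legacy Tacker CLI; the VNFD
and VNF names here are illustrative:

.. sourcecode:: console

  $ tacker vnfd-create --vnfd-file tosca-vnfd-instance-reservation.yaml \
    VNFD-RSV
  $ tacker vnf-create --vnfd-name VNFD-RSV \
    --param-file tosca-vnfd-instance-reservation-param-values.yaml vnf-rsv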

``Scaling process``

After the lease lifecycle begins in the Blazar service, Tacker will receive
a start_lease event at ``2019-04-24T20:00:00``. Tacker will then start the
scaling-out process, and you should notice that VDUs are created as per the
``increment`` value.
Similarly, when the before_end_lease event is triggered at
``2019-07-09T20:00``, Tacker will start the scaling-in process, in which
VDUs are deleted as per the ``increment`` value.
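
One way to observe the scaling (a sketch; the VDU name is illustrative) is
to watch the Nova instances backing the VDU around the lease start and end
times:

.. sourcecode:: console

  $ openstack server list --name VDU1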

When reservation resource type is physical:host
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. Create a lease for a compute host reservation:

.. sourcecode:: console

  $ blazar lease-create --physical-reservation min=1,max=1,hypervisor_properties='[">=", "$vcpus", "2"]' \
    --start-date "2019-04-08 12:00" --end-date "2019-07-09 12:00" lease-1

  +--------------+--------------------------------------------------------------+
  | Field        | Value                                                        |
  +--------------+--------------------------------------------------------------+
  | created_at   | 2018-12-10 07:42:44                                          |
  | degraded     | False                                                        |
  | end_date     | 2019-07-09T12:00:00.000000                                   |
  | events       | {                                                            |
  |              |     "status": "UNDONE",                                      |
  |              |     "lease_id": "5caba925-b591-48d9-bafb-6b2b1fc1c934",      |
  |              |     "event_type": "before_end_lease",                        |
  |              |     "created_at": "2018-12-10 07:42:46",                     |
  |              |     "updated_at": null,                                      |
  |              |     "time": "2019-07-09T11:00:00.000000",                    |
  |              |     "id": "62682a3a-07fa-49f9-8f95-5b1d8ea49a7f"             |
  |              | }                                                            |
  |              | {                                                            |
  |              |     "status": "UNDONE",                                      |
  |              |     "lease_id": "5caba925-b591-48d9-bafb-6b2b1fc1c934",      |
  |              |     "event_type": "end_lease",                               |
  |              |     "created_at": "2018-12-10 07:42:46",                     |
  |              |     "updated_at": null,                                      |
  |              |     "time": "2019-07-09T12:00:00.000000",                    |
  |              |     "id": "9f98f8a3-3154-4e8f-b27e-8f61646110d2"             |
  |              | }                                                            |
  |              | {                                                            |
  |              |     "status": "UNDONE",                                      |
  |              |     "lease_id": "5caba925-b591-48d9-bafb-6b2b1fc1c934",      |
  |              |     "event_type": "start_lease",                             |
  |              |     "created_at": "2018-12-10 07:42:46",                     |
  |              |     "updated_at": null,                                      |
  |              |     "time": "2019-04-08T12:00:00.000000",                    |
  |              |     "id": "c9cd4310-ba8e-41da-a6a0-40dc38702fab"             |
  |              | }                                                            |
  | id           | 5caba925-b591-48d9-bafb-6b2b1fc1c934                         |
  | name         | lease-1                                                      |
  | project_id   | 683322bea7154651b18792b59df67d4e                             |
  | reservations | {                                                            |
  |              |     "status": "pending",                                     |
  |              |     "before_end": "default",                                 |
  |              |     "lease_id": "5caba925-b591-48d9-bafb-6b2b1fc1c934",      |
  |              |     "resource_id": "1c05b68f-a94a-4c64-8010-745c3d51dcd8",   |
  |              |     "max": 1,                                                |
  |              |     "created_at": "2018-12-10 07:42:44",                     |
  |              |     "min": 1,                                                |
  |              |     "updated_at": "2018-12-10 07:42:46",                     |
  |              |     "missing_resources": false,                              |
  |              |     "hypervisor_properties": "[\">=\", \"$vcpus\", \"2\"]",  |
  |              |     "resource_properties": "",                               |
  |              |     "id": "c56778a4-028c-4425-8e99-babc049de9dc",            |
  |              |     "resource_type": "physical:host",                        |
  |              |     "resources_changed": false                               |
  |              | }                                                            |
  | start_date   | 2019-04-08T12:00:00.000000                                   |
  | status       | PENDING                                                      |
  | trust_id     | dddffafc804c4063898f0a5d2a6d8709                             |
  | updated_at   | 2018-12-10 07:42:46                                          |
  | user_id      | c42317bee82940509427c63410fd058a                             |
  +--------------+--------------------------------------------------------------+

..

2. Replace the flavor with the reservation in the tosca-template given for
   the reservation policy, as below:
   Ref:
   ``samples/tosca-templates/vnfd/tosca-vnfd-host-reservation.yaml``

.. note::
   The reservation id will be used only when the reservation resource type
   is physical:host.

Add the lease_id and reservation id in the parameter file.

.. code-block:: yaml

  {

  resource_type: 'physical_host',
  reservation_id: 'c56778a4-028c-4425-8e99-babc049de9dc',
  lease_id: '5caba925-b591-48d9-bafb-6b2b1fc1c934',

  }

``Sample tosca-template``:

.. sourcecode:: yaml

  tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

  description: VNF TOSCA template with reservation_id input parameters

  metadata:
    template_name: sample-tosca-vnfd-host-reservation

  topology_template:
    inputs:
      resource_type:
        type: string
        description: reservation resource type

      reservation_id:
        type: string
        description: Reservation Id Information

      lease_id:
        type: string
        description: lease id

    node_templates:
      VDU1:
        type: tosca.nodes.nfv.VDU.Tacker
        properties:
          image: cirros-0.4.0-x86_64-disk
          reservation_metadata:
            resource_type: { get_input: resource_type }
            id: { get_input: reservation_id }

      CP1:
        type: tosca.nodes.nfv.CP.Tacker
        properties:
          management: true
          order: 0
          anti_spoofing_protection: false
        requirements:
          - virtualLink:
              node: VL1
          - virtualBinding:
              node: VDU1

      VL1:
        type: tosca.nodes.nfv.VL
        properties:
          network_name: net_mgmt
          vendor: Tacker

    policies:
      - RSV:
          type: tosca.policies.tacker.Reservation
          reservation:
            start_actions: [SP_RSV]
            before_end_actions: [noop]
            end_actions: [SP_RSV]
            properties:
              lease_id: { get_input: lease_id }
      - SP_RSV:
          type: tosca.policies.tacker.Scaling
          properties:
            increment: 2
            cooldown: 120
            min_instances: 0
            max_instances: 2
            default_instances: 0
            targets: [VDU1]

..

``Scaling process``

After the lease lifecycle begins in the Blazar service, Tacker will receive
a start_lease event at ``2019-04-08T12:00:00``. Tacker will then start the
scaling-out process, and you should notice that VDUs are created as per the
``increment`` value.
Similarly, when the end_lease event is triggered at ``2019-07-09T12:00``,
Tacker will start the scaling-in process, in which VDUs are deleted as per
the ``increment`` value.

@ -36,7 +36,7 @@ futurist==1.6.0
 google-auth==1.4.1
 greenlet==0.4.13
 hacking==0.12.0
-heat-translator==1.1.0
+heat-translator==1.3.0
 idna==2.6
 imagesize==1.0.0
 ipaddress==1.0.19

@ -0,0 +1,4 @@
---
features:
  - Added reservation policy support that will help to scale in/out the VDUs
    with reserved compute resources.

@ -39,7 +39,7 @@ openstackdocstheme>=1.18.1 # Apache-2.0
 python-neutronclient>=6.7.0 # Apache-2.0
 python-novaclient>=9.1.0 # Apache-2.0
 tosca-parser>=0.8.1 # Apache-2.0
-heat-translator>=1.1.0 # Apache-2.0
+heat-translator>=1.3.0 # Apache-2.0
 cryptography>=2.1 # BSD/Apache-2.0
 paramiko>=2.0.0 # LGPLv2.1+
 pyroute2>=0.4.21;sys_platform!='win32' # Apache-2.0 (+ dual licensed GPL2)

@ -0,0 +1,5 @@
{
lease_id: '8b01bdf8-a47c-49ea-96f1-3504fccfc9d4',
resource_type: 'physical_host',
reservation_id: '707e4f81-aedd-44cd-a445-fd18a47d0228',
}

103 samples/tosca-templates/vnfd/tosca-vnfd-host-reservation.yaml (new file)
@ -0,0 +1,103 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: VNF TOSCA template with reservation id input parameters

metadata:
  template_name: sample-tosca-vnfd-host-reservation

topology_template:
  inputs:
    lease_id:
      type: string
      description: lease id

    resource_type:
      type: string
      description: reservation resource type

    reservation_id:
      type: string
      description: reservation id

  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      properties:
        image: cirros-0.4.0-x86_64-disk
        flavor: m1.tiny
        reservation_metadata:
          resource_type: { get_input: resource_type }
          id: { get_input: reservation_id }

    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1

    CP2:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        order: 1
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL2
        - virtualBinding:
            node: VDU1

    CP3:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        order: 2
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL3
        - virtualBinding:
            node: VDU1

    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net_mgmt
        vendor: Tacker

    VL2:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net0
        vendor: Tacker

    VL3:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net1
        vendor: Tacker

  policies:
    - RSV:
        type: tosca.policies.tacker.Reservation
        reservation:
          start_actions: [SP_RSV]
          before_end_actions: [SP_RSV]
          end_actions: [noop]
          properties:
            lease_id: { get_input: lease_id }
    - SP_RSV:
        type: tosca.policies.tacker.Scaling
        properties:
          increment: 2
          cooldown: 120
          min_instances: 0
          max_instances: 2
          default_instances: 0
          targets: [VDU1]

@ -0,0 +1,6 @@
{
flavor: '707e4f81-aedd-44cd-a445-fd18a47d0228',
lease_id: '8b01bdf8-a47c-49ea-96f1-3504fccfc9d4',
resource_type: 'virtual_instance',
server_group_id: '8b01bdf8-a47c-49ea-96f1-3504fccfc9d4',
}

@ -0,0 +1,107 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: VNF TOSCA template with flavor input parameters

metadata:
  template_name: sample-tosca-vnfd-instance-reservation

topology_template:
  inputs:
    flavor:
      type: string
      description: Flavor Information

    lease_id:
      type: string
      description: lease id

    resource_type:
      type: string
      description: reservation resource type

    server_group_id:
      type: string
      description: server group id

  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      properties:
        image: cirros-0.4.0-x86_64-disk
        flavor: { get_input: flavor }
        reservation_metadata:
          resource_type: { get_input: resource_type }
          id: { get_input: server_group_id }

    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1

    CP2:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        order: 1
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL2
        - virtualBinding:
            node: VDU1

    CP3:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        order: 2
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL3
        - virtualBinding:
            node: VDU1

    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net_mgmt
        vendor: Tacker

    VL2:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net0
        vendor: Tacker

    VL3:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net1
        vendor: Tacker

  policies:
    - RSV:
        type: tosca.policies.tacker.Reservation
        reservation:
          start_actions: [SP_RSV]
          before_end_actions: [SP_RSV]
          end_actions: [noop]
          properties:
            lease_id: { get_input: lease_id }
    - SP_RSV:
        type: tosca.policies.tacker.Scaling
        properties:
          increment: 2
          cooldown: 120
          min_instances: 0
          max_instances: 2
          default_instances: 0
          targets: [VDU1]

@ -53,6 +53,9 @@ POLICY_SCALING_ACTIONS = (ACTION_SCALE_OUT,
 POLICY_ACTIONS = {POLICY_SCALING: POLICY_SCALING_ACTIONS}
 POLICY_ALARMING = 'tosca.policies.tacker.Alarming'
 VALID_POLICY_TYPES = [POLICY_SCALING, POLICY_ALARMING]
+POLICY_RESERVATION = 'tosca.policies.tacker.Reservation'
+RESERVATION_POLICY_ACTIONS = ['start_actions',
+                              'before_end_actions', 'end_actions']
 DEFAULT_ALARM_ACTIONS = ['respawn', 'log', 'log_and_kill', 'notify']

 RES_TYPE_VNFD = "vnfd"

@ -12,6 +12,7 @@

 POLICY_ALARMING = 'tosca.policies.tacker.Alarming'
 DEFAULT_ALARM_ACTIONS = ['respawn', 'log', 'log_and_kill', 'notify']
+POLICY_RESERVATION = 'tosca.policies.tacker.Reservation'
 VNF_CIRROS_CREATE_TIMEOUT = 300
 VNFC_CREATE_TIMEOUT = 600
 VNF_CIRROS_DELETE_TIMEOUT = 300

@ -64,6 +64,8 @@ vnfd_alarm_multi_actions_tosca_template = _get_template(
 nsd_tosca_template = yaml.safe_load(_get_template('tosca_nsd_template.yaml'))
 vnffgd_wrong_cp_number_template = yaml.safe_load(_get_template(
     'tosca_vnffgd_wrong_cp_number_template.yaml'))
+vnfd_instance_reservation_alarm_scale_tosca_template = _get_template(
+    'test_tosca-vnfd-instance-reservation.yaml')


 def get_dummy_vnfd_obj():

@ -0,0 +1,90 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: VNF TOSCA template with instance reservation input parameters

metadata:
  template_name: sample-tosca-vnfd-instance-reservation

topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      properties:
        image: cirros-0.4.0-x86_64-disk
        flavor: 'cde27e47-1c88-4bb7-a64e-8d7c69014e4f'
        reservation_metadata:
          resource_type: 'virtual_instance'
          id: '8b01bdf8-a47c-49ea-96f1-3504fccfc9d4'

    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1

    CP2:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        order: 1
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL2
        - virtualBinding:
            node: VDU1

    CP3:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        order: 2
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL3
        - virtualBinding:
            node: VDU1

    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net_mgmt
        vendor: Tacker

    VL2:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net0
        vendor: Tacker

    VL3:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: net1
        vendor: Tacker

  policies:
    - RSV:
        type: tosca.policies.tacker.Reservation
        reservation:
          start_actions: [SP_RSV]
          before_end_actions: [SP_RSV]
          end_actions: [noop]
          properties:
            lease_id: '6ff61be8-91c3-4874-8f1b-128a03a455cb'
    - SP_RSV:
        type: tosca.policies.tacker.Scaling
        properties:
          increment: 2
          cooldown: 120
          min_instances: 0
          max_instances: 2
          default_instances: 0
          targets: [VDU1]

@ -0,0 +1,75 @@
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: OpenWRT with services

metadata:
  template_name: OpenWRT

topology_template:
  node_templates:

    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      properties:
        image: OpenWRT
        flavor: m1.tiny
        reservation_metadata:
          resource_type: physical_host
          id: 459e94c9-efcd-4320-abf5-8c18cd82c331
        config: |
          param0: key1
          param1: key2
        mgmt_driver: openwrt
        monitoring_policy:
          name: ping
          actions:
            failure: respawn
          parameters:
            count: 3
            interval: 10
        metadata: {metering.server_group: VDU1-2e3261d9-1}

    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        anti_spoofing_protection: false
        type: sriov
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1

    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: existing_network_1
        vendor: Tacker

  policies:
    - vdu1_placement_policy:
        type: tosca.policies.tacker.Placement
        properties:
          policy: affinity
          strict: true
          description: Apply affinity placement policy to the application servers
          targets: [ VDU1 ]
    - vdu1_cpu_usage_monitoring_policy:
        type: tosca.policies.tacker.Alarming
        triggers:
          vdu_hcpu_usage_respawning:
            event_type:
              type: tosca.events.resource.utilization
              implementation: Ceilometer
            metric: cpu_util
            condition:
              threshold: 50
              constraint: utilization greater_than 50%
              granularity: 60
              evaluations: 1
              aggregation_method: mean
              resource_type: instance
              comparison_operator: gt
            metadata: VDU1-2e3261d9-1
            action: ''

@ -17,9 +17,11 @@ from oslo_serialization import jsonutils
from oslo_utils import timeutils
import testtools

from tacker import context
from tacker.db.common_services import common_services_db_plugin
from tacker.plugins.common import constants
from tacker.vnfm import monitor
from tacker.vnfm import plugin

MOCK_VNF_ID = 'a737497c-761c-11e5-89c3-9cb6541d805d'
MOCK_VNF = {

@ -203,3 +205,52 @@ class TestVNFMonitor(testtools.TestCase):
        test_device_status = test_vnfmonitor._hosting_vnfs[MOCK_VNF_ID][
            'vnf']['status']
        self.assertEqual('PENDING_HEAL', test_device_status)


class TestVNFReservationAlarmMonitor(testtools.TestCase):

    def setUp(self):
        super(TestVNFReservationAlarmMonitor, self).setUp()
        self.context = context.get_admin_context()
        self.plugin = plugin.VNFMPlugin

    def test_process_alarm_for_vnf(self):
        vnf = {'id': 'a737497c-761c-11e5-89c3-9cb6541d805d'}
        trigger = {'params': {'data': {
            'alarm_id': 'a737497c-761c-11e5-89c3-9cb6541d805d',
            'current': 'alarm'}}}
        test_vnf_reservation_monitor = monitor.VNFReservationAlarmMonitor()
        response = test_vnf_reservation_monitor.process_alarm_for_vnf(
            vnf, trigger)
        self.assertEqual(response, True)

    @mock.patch('tacker.db.common_services.common_services_db_plugin.'
                'CommonServicesPluginDb.create_event')
    @mock.patch('tacker.vnfm.plugin.VNFMPlugin.get_vnf_policies')
    def test_update_vnf_with_alarm(self, mock_get_vnf_policies,
                                   mock_db_service):
        mock_get_vnf_policies.return_value = [
            {'name': 'SP_RSV', 'type': 'tosca.policies.tacker.Scaling'}]
        mock_db_service.return_value = {
            'event_type': 'MONITOR',
            'resource_id': '9770fa22-747d-426e-9819-057a95cb778c',
            'timestamp': '2018-10-30 06:01:45.628162',
            'event_details': {'Alarm URL set successfully': {
                'start_actions': 'alarm'}},
            'resource_state': 'CREATE',
            'id': '4583',
            'resource_type': 'vnf'}
        vnf = {'id': 'a737497c-761c-11e5-89c3-9cb6541d805d',
               'status': 'insufficient_data'}
        test_vnf_reservation_monitor = monitor.VNFReservationAlarmMonitor()
        policy_dict = {
            'type': 'tosca.policies.tacker.Reservation',
            'reservation': {'before_end_actions': ['SP_RSV'],
                            'end_actions': ['noop'],
                            'start_actions': ['SP_RSV'],
                            'properties': {
                                'lease_id':
                                    'ffa079a0-9d6f-411d-ab15-89219c0ee14d'}}}
        response = test_vnf_reservation_monitor.update_vnf_with_reservation(
            self.plugin, self.context, vnf, policy_dict)
        self.assertEqual(len(response.keys()), 3)

@ -139,6 +139,7 @@ class TestVNFMPlugin(db_base.SqlTestCase):
         self._stub_get_vim()
         self._mock_vnf_monitor()
         self._mock_vnf_alarm_monitor()
+        self._mock_vnf_reservation_monitor()
         self._insert_dummy_vim()
         self.vnfm_plugin = plugin.VNFMPlugin()
         mock.patch('tacker.db.common_services.common_services_db_plugin.'
@ -204,6 +205,14 @@ class TestVNFMPlugin(db_base.SqlTestCase):
         self._mock(
             'tacker.vnfm.monitor.VNFAlarmMonitor', fake_vnf_alarm_monitor)

+    def _mock_vnf_reservation_monitor(self):
+        self._vnf_reservation_mon = mock.Mock(wraps=FakeVNFMonitor())
+        fake_vnf_reservation_monitor = mock.Mock()
+        fake_vnf_reservation_monitor.return_value = self._vnf_reservation_mon
+        self._mock(
+            'tacker.vnfm.monitor.VNFReservationAlarmMonitor',
+            fake_vnf_reservation_monitor)
+
     def _insert_dummy_vnf_template(self):
         session = self.context.session
         vnf_template = vnfm_db.VNFD(
@ -1004,3 +1013,11 @@ class TestVNFMPlugin(db_base.SqlTestCase):
         mock_heal_vdu.assert_called_with(plugin=self.vnfm_plugin,
             context=self.context, vnf_dict=mock.ANY,
             heal_request_data_obj=heal_request_data_obj)
+
+    @patch('tacker.db.vnfm.vnfm_db.VNFMPluginDb.get_vnf')
+    def test_create_vnf_trigger_scale_with_reservation(self, mock_get_vnf):
+        dummy_vnf = self._get_dummy_vnf(
+            utils.vnfd_instance_reservation_alarm_scale_tosca_template)
+        mock_get_vnf.return_value = dummy_vnf
+        self._test_create_vnf_trigger(policy_name="start_actions",
+                                      action_value="SP_RSV-out")

@ -72,35 +72,54 @@ class TestToscaUtils(testtools.TestCase):
         self.assertEqual(expected_mgmt_ports, mgmt_ports)

     def test_post_process_template(self):
-        tosca2 = tosca_template.ToscaTemplate(parsed_params={}, a_file=False,
-                                              yaml_dict_tpl=self.vnfd_dict)
-        toscautils.post_process_template(tosca2)
+        tosca_post_process_tpl = _get_template(
+            'test_tosca_post_process_template.yaml')
+        vnfd_dict = yaml.safe_load(tosca_post_process_tpl)
+        toscautils.updateimports(vnfd_dict)
+        tosca = tosca_template.ToscaTemplate(parsed_params={}, a_file=False,
+                                             yaml_dict_tpl=vnfd_dict)
+        toscautils.post_process_template(tosca)

         invalidNodes = 0
-        for nt in tosca2.nodetemplates:
+        deletedProperties = 0
+        convertedValues = 0
+        convertedProperties = 0
+
+        for nt in tosca.nodetemplates:
             if (nt.type_definition.is_derived_from(toscautils.MONITORING) or
-                nt.type_definition.is_derived_from(toscautils.FAILURE) or
+                    nt.type_definition.is_derived_from(toscautils.FAILURE) or
                     nt.type_definition.is_derived_from(toscautils.PLACEMENT)):
                 invalidNodes += 1

+            if nt.type in toscautils.delpropmap.keys():
+                for prop in toscautils.delpropmap[nt.type]:
+                    for p in nt.get_properties_objects():
+                        if prop == p.name:
+                            deletedProperties += 1
+
+            if nt.type in toscautils.convert_prop_values:
+                for prop in toscautils.convert_prop_values[nt.type].keys():
+                    convertmap = toscautils.convert_prop_values[nt.type][prop]
+                    for p in nt.get_properties_objects():
+                        if (prop == p.name and
+                                p.value in convertmap.keys()):
+                            convertedValues += 1
+
+            if nt.type in toscautils.convert_prop:
+                for prop in toscautils.convert_prop[nt.type].keys():
+                    for p in nt.get_properties_objects():
+                        if prop == p.name:
+                            convertedProperties += 1
+
+            if nt.name == 'VDU1':
+                vdu1_hints = nt.get_properties().get('scheduler_hints')
+                vdu1_rsv = vdu1_hints.value.get('reservation')
+
         self.assertEqual(0, invalidNodes)

-        deletedProperties = 0
-        if nt.type in toscautils.delpropmap.keys():
-            for prop in toscautils.delpropmap[nt.type]:
-                for p in nt.get_properties_objects():
-                    if prop == p.name:
-                        deletedProperties += 1
-
         self.assertEqual(0, deletedProperties)

-        convertedProperties = 0
-        if nt.type in toscautils.convert_prop:
-            for prop in toscautils.convert_prop[nt.type].keys():
-                for p in nt.get_properties_objects():
-                    if prop == p.name:
-                        convertedProperties += 1
-
         self.assertEqual(0, convertedValues)
         self.assertEqual(0, convertedProperties)
+        self.assertEqual(vdu1_rsv, '459e94c9-efcd-4320-abf5-8c18cd82c331')

     def test_post_process_heat_template(self):
         tosca1 = tosca_template.ToscaTemplate(parsed_params={}, a_file=False,

@ -199,6 +199,19 @@ data_types:
         type: string
         required: false

+  tosca.datatypes.tacker.VduReservationMetadata:
+    properties:
+      resource_type:
+        # TODO(niraj-singh): Need to add constraints
+        # ``valid_values: [ physical_host, virtual_instance ]``
+        # once Bug #1815755 is fixed.
+        type: string
+        required: true
+        default: virtual_instance
+      id:
+        type: string
+        required: true
+
 policy_types:
   tosca.policies.tacker.Placement:
     derived_from: tosca.policies.Placement

@ -338,3 +351,26 @@ policy_types:
         required: false
         default: 120
         description: Wait time (in seconds) between consecutive scaling operations. During the cooldown period, scaling action will be ignored

+  tosca.policies.tacker.Reservation:
+    derived_from: tosca.policies.Reservation
+    reservation:
+      start_actions:
+        type: list
+        entry_schema:
+          type: string
+        required: true
+      before_end_actions:
+        type: list
+        entry_schema:
+          type: string
+        required: true
+      end_actions:
+        type: list
+        entry_schema:
+          type: string
+        required: true
+      properties:
+        lease_id:
+          type: string
+          required: true

@ -270,6 +270,10 @@ node_types:
         entry_schema:
           type: string

+      reservation_metadata:
+        type: tosca.datatypes.tacker.VduReservationMetadata
+        required: false
+
   tosca.nodes.nfv.CP.Tacker:
     derived_from: tosca.nodes.nfv.CP
     properties:

@ -20,9 +20,11 @@ import yaml

from collections import OrderedDict
from oslo_log import log as logging
from oslo_utils import uuidutils
from tacker.common import exceptions
from tacker.common import log
from tacker.common import utils
from tacker.extensions import vnfm
from tacker.plugins.common import constants
from toscaparser import properties
from toscaparser.utils import yamlparser

@ -31,6 +33,7 @@ FAILURE = 'tosca.policies.tacker.Failure'
LOG = logging.getLogger(__name__)
MONITORING = 'tosca.policies.Monitoring'
SCALING = 'tosca.policies.Scaling'
RESERVATION = 'tosca.policies.Reservation'
PLACEMENT = 'tosca.policies.tacker.Placement'
TACKERCP = 'tosca.nodes.nfv.CP.Tacker'
TACKERVDU = 'tosca.nodes.nfv.VDU.Tacker'

@ -200,6 +203,35 @@ def get_vdu_metadata(template, unique_id=None):
     return metadata


+@log.log
+def get_metadata_for_reservation(template, metadata):
+    """Method used to add lease_id in metadata
+
+    so that it can be used further while creating query_metadata
+
+    :param template: ToscaTemplate object
+    :param metadata: metadata dict
+    :return: dictionary containing lease_id
+    """
+
+    metadata.setdefault('reservation', {})
+    input_param_list = template.parsed_params.keys()
+    # If lease_id is passed in the parameter file,
+    # get it from the template parsed_params.
+    if 'lease_id' in input_param_list:
+        metadata['reservation']['lease_id'] = template.parsed_params[
+            'lease_id']
+    else:
+        for policy in template.policies:
+            if policy.entity_tpl['type'] == constants.POLICY_RESERVATION:
+                metadata['reservation']['lease_id'] = policy.entity_tpl[
+                    'reservation']['properties']['lease_id']
+                break
+    if not uuidutils.is_uuid_like(metadata['reservation']['lease_id']):
+        raise exceptions.Invalid('Invalid UUID for lease_id')
+    return metadata
+
+
 @log.log
 def pre_process_alarm_resources(vnf, template, vdu_metadata, unique_id=None):
     alarm_resources = dict()
@ -207,9 +239,19 @@ def pre_process_alarm_resources(vnf, template, vdu_metadata, unique_id=None):
     alarm_actions = dict()
     for policy in template.policies:
         if policy.type_definition.is_derived_from(MONITORING):
-            query_metadata = _process_query_metadata(
-                vdu_metadata, policy, unique_id)
-            alarm_actions = _process_alarm_actions(vnf, policy)
+            query_metadata.update(_process_query_metadata(
+                vdu_metadata, policy, unique_id))
+            alarm_actions.update(_process_alarm_actions(vnf, policy))
+        if policy.type_definition.is_derived_from(RESERVATION):
+            query_metadata.update(_process_query_metadata_reservation(
+                vdu_metadata, policy))
+            alarm_actions.update(_process_alarm_actions_for_reservation(
+                vnf, policy))
+            alarm_resources['event_types'] = {
+                'start_actions': {'event_type': 'lease.event.start_lease'},
+                'before_end_actions': {
+                    'event_type': 'lease.event.before_end_lease'},
+                'end_actions': {'event_type': 'lease.event.end_lease'}}
     alarm_resources['query_metadata'] = query_metadata
     alarm_resources['alarm_actions'] = alarm_actions
     return alarm_resources

@ -248,6 +290,19 @@ def _process_query_metadata(metadata, policy, unique_id):
     return query_mtdata


+def _process_query_metadata_reservation(metadata, policy):
+    query_metadata = dict()
+    policy_actions = policy.entity_tpl['reservation'].keys()
+    policy_actions.remove('properties')
+    for action in policy_actions:
+        query_template = [{
+            "field": 'traits.lease_id', "op": "eq",
+            "value": metadata['reservation']['lease_id']}]
+        query_metadata[action] = query_template
+
+    return query_metadata
+
+
 def _process_alarm_actions(vnf, policy):
     # process alarm url here
     triggers = policy.entity_tpl['triggers']

@ -262,6 +317,20 @@ def _process_alarm_actions(vnf, policy):
     return alarm_actions


+def _process_alarm_actions_for_reservation(vnf, policy):
+    # process alarm url here
+    alarm_actions = dict()
+    policy_actions = policy.entity_tpl['reservation'].keys()
+    policy_actions.remove('properties')
+    for action in policy_actions:
+        alarm_url = vnf['attributes'].get(action)
+        if alarm_url:
+            LOG.debug('Alarm url in heat %s', alarm_url)
+            alarm_actions[action] = dict()
+            alarm_actions[action]['alarm_actions'] = [alarm_url]
+    return alarm_actions
+
+
 def get_volumes(template):
     volume_dict = dict()
     node_tpl = template['topology_template']['node_templates']

@ -429,29 +498,37 @@ def post_process_heat_template(heat_tpl, mgmt_ports, metadata,
     else:
         heat_dict['outputs'] = output
         LOG.debug('Added output for %s', outputname)
-    if metadata:
+    if metadata.get('vdus'):
         for vdu_name, metadata_dict in metadata['vdus'].items():
             metadata_dict['metering.server_group'] = \
                 (metadata_dict['metering.server_group'] + '-' + unique_id)[:15]
             if heat_dict['resources'].get(vdu_name):
                 heat_dict['resources'][vdu_name]['properties']['metadata'] =\
                     metadata_dict

     query_metadata = alarm_resources.get('query_metadata')
     alarm_actions = alarm_resources.get('alarm_actions')
+    event_types = alarm_resources.get('event_types')
     if query_metadata:
         for trigger_name, matching_metadata_dict in query_metadata.items():
             if heat_dict['resources'].get(trigger_name):
                 query_mtdata = dict()
                 query_mtdata['query'] = \
                     query_metadata[trigger_name]
-                heat_dict['resources'][trigger_name]['properties'].\
-                    update(query_mtdata)
+                heat_dict['resources'][trigger_name][
+                    'properties'].update(query_mtdata)
     if alarm_actions:
         for trigger_name, alarm_actions_dict in alarm_actions.items():
             if heat_dict['resources'].get(trigger_name):
                 heat_dict['resources'][trigger_name]['properties']. \
                     update(alarm_actions_dict)

+    if event_types:
+        for trigger_name, event_type in event_types.items():
+            if heat_dict['resources'].get(trigger_name):
+                heat_dict['resources'][trigger_name]['properties'].update(
+                    event_type)
+
     add_resources_tpl(heat_dict, res_tpl)
     for res in heat_dict["resources"].values():
         if not res['type'] == HEAT_SOFTWARE_CONFIG:

@ -496,6 +573,18 @@ def add_volume_resources(heat_dict, vol_res):

 @log.log
 def post_process_template(template):
+    def _add_scheduler_hints_property(nt):
+        hints = nt.get_property_value('scheduler_hints')
+        if hints is None:
+            hints = OrderedDict()
+            hints_schema = {'type': 'map', 'required': False,
+                            'entry_schema': {'type': 'string'}}
+            hints_prop = properties.Property('scheduler_hints',
+                                             hints,
+                                             hints_schema)
+            nt.get_properties_objects().append(hints_prop)
+        return hints
+
     for nt in template.nodetemplates:
         if (nt.type_definition.is_derived_from(MONITORING) or
                 nt.type_definition.is_derived_from(FAILURE) or

@ -530,6 +619,43 @@ def post_process_template(template):
             nt.get_properties_objects().append(newprop)
             nt.get_properties_objects().remove(p)

+        if nt.type_definition.is_derived_from(TACKERVDU):
+            reservation_metadata = nt.get_property_value(
+                'reservation_metadata')
+            if reservation_metadata is not None:
+                hints = _add_scheduler_hints_property(nt)
+
+                input_resource_type = reservation_metadata.get(
+                    'resource_type')
+                input_id = reservation_metadata.get('id')
+
+                # Check whether 'resource_type' and 'id' are passed through
+                # an input parameter file or not. If they are, get the
+                # values from the input parameter file.
+                if (isinstance(input_resource_type, OrderedDict) and
+                        input_resource_type.get('get_input')):
+                    input_resource_type = template.parsed_params.get(
+                        input_resource_type.get('get_input'))
+
+                # TODO(niraj-singh): Remove this validation once bug
+                # 1815755 is fixed.
+                if input_resource_type not in (
+                        'physical_host', 'virtual_instance'):
+                    raise exceptions.Invalid(
+                        'resource_type must be physical_host'
+                        ' or virtual_instance')
+
+                if (isinstance(input_id, OrderedDict) and
+                        input_id.get('get_input')):
+                    input_id = template.parsed_params.get(
+                        input_id.get('get_input'))
+
+                if input_resource_type == 'physical_host':
+                    hints['reservation'] = input_id
+                elif input_resource_type == 'virtual_instance':
+                    hints['group'] = input_id
+                nt.get_properties_objects().remove(nt.get_properties().get(
+                    'reservation_metadata'))
+

 @log.log
 def get_mgmt_driver(template):

@ -712,7 +838,7 @@ def update_nested_scaling_resources(nested_resources, mgmt_ports, metadata,
         list(nested_resources.items())[0]
     nested_resources_dict =\
         yamlparser.simple_ordered_parse(nested_resources_yaml)
-    if metadata:
+    if metadata.get('vdus'):
         for vdu_name, metadata_dict in metadata['vdus'].items():
             if nested_resources_dict['resources'].get(vdu_name):
                 nested_resources_dict['resources'][vdu_name]['properties']['metadata'] = \

@ -23,6 +23,7 @@ from tacker.common import exceptions
 from tacker.common import log
 from tacker.extensions import common_services as cs
 from tacker.extensions import vnfm
+from tacker.plugins.common import constants
 from tacker.tosca import utils as toscautils

@ -279,6 +280,11 @@ class TOSCAToHOT(object):

         unique_id = uuidutils.generate_uuid()
         metadata = toscautils.get_vdu_metadata(tosca, unique_id=unique_id)
+        for policy in tosca.policies:
+            if policy.entity_tpl['type'] == constants.POLICY_RESERVATION:
+                metadata = toscautils.get_metadata_for_reservation(
+                    tosca, metadata)
+                break

         alarm_resources = toscautils.pre_process_alarm_resources(
             self.vnf, tosca, metadata, unique_id=unique_id)

@ -26,6 +26,7 @@ from oslo_serialization import jsonutils
 from oslo_utils import timeutils

 from tacker.common import driver_manager
+from tacker.common import exceptions
 from tacker import context as t_context
 from tacker.plugins.common import constants
 from tacker.vnfm import utils as vnfm_utils

@ -342,3 +343,87 @@ class VNFAlarmMonitor(object):
    def process_alarm(self, driver, vnf_dict, kwargs):
        return self._invoke(driver,
                            vnf=vnf_dict, kwargs=kwargs)


class VNFReservationAlarmMonitor(VNFAlarmMonitor):
    """VNF Reservation Alarm monitor"""

    def update_vnf_with_reservation(self, plugin, context, vnf, policy_dict):

        alarm_url = dict()

        def create_alarm_action(action, action_list, scaling_type):
            params = dict()
            params['vnf_id'] = vnf['id']
            params['mon_policy_name'] = action
            driver = 'ceilometer'

            def _refactor_backend_policy(bk_policy_name, bk_action_name):
                policy = '%(policy_name)s%(action_name)s' % {
                    'policy_name': bk_policy_name,
                    'action_name': bk_action_name}
                return policy

            for index, policy_action_name in enumerate(action_list):
                filters = {'name': policy_action_name}
                bkend_policies = \
                    plugin.get_vnf_policies(context, vnf['id'], filters)
                if bkend_policies:
                    if constants.POLICY_SCALING in str(bkend_policies[0]):
                        action_list[index] = _refactor_backend_policy(
                            policy_action_name, scaling_type)

            # Support multiple actions. Ex: respawn % notify
            action_name = '%'.join(action_list)
            params['mon_policy_action'] = action_name
            alarm_url[action] = \
                self.call_alarm_url(driver, vnf, params)
            details = "Alarm URL set successfully: %s" % alarm_url
            vnfm_utils.log_events(t_context.get_admin_context(), vnf,
                                  constants.RES_EVT_MONITOR,
                                  details)

        before_end_action = policy_dict['reservation']['before_end_actions']
        end_action = policy_dict['reservation']['end_actions']
        start_action = policy_dict['reservation']['start_actions']

        scaling_policies = \
            plugin.get_vnf_policies(
                context, vnf['id'], filters={
                    'type': constants.POLICY_SCALING})

        if len(scaling_policies) == 0:
            raise exceptions.VnfPolicyNotFound(
                policy=constants.POLICY_SCALING, vnf_id=vnf['id'])

        for scaling_policy in scaling_policies:
            # validating start_actions for the scale-out policy action
            if scaling_policy['name'] not in start_action:
                raise exceptions.Invalid(
                    'Not a valid template: start_actions must contain'
                    ' %s as a scaling-out action' % scaling_policy['name'])

            # validating before_end_actions and end_actions for the
            # scale-in policy action
            if scaling_policy['name'] not in before_end_action:
                if scaling_policy['name'] not in end_action:
                    raise exceptions.Invalid(
                        'Not a valid template:'
                        ' before_end_actions or end_actions'
                        ' should contain scaling policy: %s'
                        % scaling_policy['name'])

        for action in constants.RESERVATION_POLICY_ACTIONS:
            scaling_type = "-out" if action == 'start_actions' else "-in"
            create_alarm_action(action, policy_dict[
                'reservation'][action], scaling_type)

        return alarm_url

    def process_alarm_for_vnf(self, vnf, trigger):
        """call in plugin"""
        params = trigger['params']
        alarm_dict = dict()
        alarm_dict['alarm_id'] = params['data'].get('alarm_id')
        alarm_dict['status'] = params['data'].get('current')
        driver = 'ceilometer'
        return self.process_alarm(driver, vnf, alarm_dict)

@ -144,6 +144,7 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
             cfg.CONF.tacker.policy_action)
         self._vnf_monitor = monitor.VNFMonitor(self.boot_wait)
         self._vnf_alarm_monitor = monitor.VNFAlarmMonitor()
+        self._vnf_reservation_monitor = monitor.VNFReservationAlarmMonitor()
         self._vnf_app_monitor = monitor.VNFAppMonitor()
         self._init_monitoring()

@ -253,17 +254,23 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
     def add_alarm_url_to_vnf(self, context, vnf_dict):
         vnfd_yaml = vnf_dict['vnfd']['attributes'].get('vnfd', '')
         vnfd_dict = yaml.safe_load(vnfd_yaml)
-        if vnfd_dict and vnfd_dict.get('tosca_definitions_version'):
-            polices = vnfd_dict['topology_template'].get('policies', [])
-            for policy_dict in polices:
-                name, policy = list(policy_dict.items())[0]
-                if policy['type'] in constants.POLICY_ALARMING:
-                    alarm_url =\
-                        self._vnf_alarm_monitor.update_vnf_with_alarm(
-                            self, context, vnf_dict, policy)
-                    vnf_dict['attributes']['alarming_policy'] = vnf_dict['id']
-                    vnf_dict['attributes'].update(alarm_url)
-                    break
+        if not (vnfd_dict and vnfd_dict.get('tosca_definitions_version')):
+            return
+        polices = vnfd_dict['topology_template'].get('policies', [])
+        for policy_dict in polices:
+            name, policy = list(policy_dict.items())[0]
+            if policy['type'] in constants.POLICY_ALARMING:
+                alarm_url =\
+                    self._vnf_alarm_monitor.update_vnf_with_alarm(
+                        self, context, vnf_dict, policy)
+                vnf_dict['attributes']['alarming_policy'] = vnf_dict['id']
+                vnf_dict['attributes'].update(alarm_url)
+            elif policy['type'] in constants.POLICY_RESERVATION:
+                alarm_url = \
+                    self._vnf_reservation_monitor.update_vnf_with_reservation(
+                        self, context, vnf_dict, policy)
+                vnf_dict['attributes']['reservation_policy'] = vnf_dict['id']
+                vnf_dict['attributes'].update(alarm_url)

     def add_vnf_to_appmonitor(self, context, vnf_dict):
         appmonitor = self._vnf_app_monitor.create_app_dict(context, vnf_dict)

@ -746,7 +753,8 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
     def _make_policy_dict(self, vnf, name, policy):
         p = {}
         p['type'] = policy.get('type')
-        p['properties'] = policy.get('properties') or policy.get('triggers')
+        p['properties'] = policy.get('properties') or policy.get(
+            'triggers') or policy.get('reservation')
         p['vnf'] = vnf
         p['name'] = name
         p['id'] = uuidutils.generate_uuid()

@ -816,8 +824,20 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):

     def _validate_alarming_policy(self, context, vnf_id, trigger):
         # validate alarm status
-        if not self._vnf_alarm_monitor.process_alarm_for_vnf(vnf_id, trigger):
-            raise exceptions.AlarmUrlInvalid(vnf_id=vnf_id)
+
+        # Trigger will contain only one action in trigger['trigger'], as it
+        # is filtered in _get_vnf_triggers().
+        # Get the action from the trigger to decide which
+        # process_alarm_for_vnf method will be called.
+        if trigger['trigger'].keys()[0]\
+                in constants.RESERVATION_POLICY_ACTIONS:
+            if not self._vnf_reservation_monitor.process_alarm_for_vnf(
+                    vnf_id, trigger):
+                raise exceptions.AlarmUrlInvalid(vnf_id=vnf_id)
+        else:
+            if not self._vnf_alarm_monitor.process_alarm_for_vnf(
+                    vnf_id, trigger):
+                raise exceptions.AlarmUrlInvalid(vnf_id=vnf_id)

         # validate policy action. if action is composite, split it.
         # ex: respawn%notify

@ -855,8 +875,12 @@ class VNFMPlugin(vnfm_db.VNFMPluginDb, VNFMMgmtMixin):
         # validate url

     def _get_vnf_triggers(self, context, vnf_id, filters=None, fields=None):
-        policy = self.get_vnf_policy_by_type(
-            context, vnf_id, policy_type=constants.POLICY_ALARMING)
+        if filters.get('name') in constants.RESERVATION_POLICY_ACTIONS:
+            policy = self.get_vnf_policy_by_type(
+                context, vnf_id, policy_type=constants.POLICY_RESERVATION)
+        else:
+            policy = self.get_vnf_policy_by_type(
+                context, vnf_id, policy_type=constants.POLICY_ALARMING)
         triggers = policy['properties']
         vnf_trigger = dict()
         for trigger_name, trigger_dict in triggers.items():