Add system tests for nsx-t plugin from Test Plan

Change-Id: Ibb7da69a422462f30b0256d9c7c9f75761b5f658
Vasily Gorin 2016-10-06 15:53:17 +03:00 committed by Ekaterina Khomyakova
parent 560d1f4524
commit be63a4e3f0
9 changed files with 650 additions and 425 deletions

View File

@ -33,8 +33,6 @@ Steps
* Networking: Neutron with NSX-T plugin
* Storage: default
3. Add nodes with the following roles:
* Controller
* Compute
4. Configure interfaces on nodes.

View File

@ -2,20 +2,20 @@ System
======
Set up for system tests
-----------------------
ID
##
nsxt_setup_system
Description
###########
Deploy environment with 1 controller, 1 compute-vmware and 2 compute nodes. Nova Compute instances are running on the controller and compute-vmware nodes. It is a config for all system tests.
Complexity
@ -27,21 +27,21 @@ core
Steps
#####
1. Log in to the Fuel web UI with pre-installed NSX-T plugin.
2. Create a new environment with the following parameters:
* Compute: KVM, QEMU with vCenter
* Networking: Neutron with NSX-T plugin
* Storage: default
* Additional services: default
3. Add nodes with the following roles:
* Controller
* Compute-vmware
* Compute
* Compute
4. Configure interfaces on nodes.
5. Configure network settings.
6. Enable and configure NSX-T plugin.
7. Configure VMware vCenter Settings. Add 2 vSphere clusters, configure Nova Compute instances on controller and compute-vmware.
8. Verify networks.
9. Deploy cluster.
10. Run OSTF.
@ -50,294 +50,11 @@ Steps
Expected result
###############
Cluster should be deployed and all OSTF test cases should pass.
Check abilities to create and terminate networks on NSX.
--------------------------------------------------------
ID
##
nsxt_create_terminate_networks
Description
###########
Verifies that creation of a network is propagated to vcenter.
Complexity
##########
core
Steps
#####
1. Setup for system tests.
2. Log in to Horizon Dashboard.
3. Add private networks net_01 and net_02.
4. Check that networks are present in the vcenter.
5. Remove private network net_01.
6. Check that network net_01 has been removed from the vcenter.
7. Add private network net_01.
Expected result
###############
No errors.
Check abilities to bind port on NSX to VM, disable and enable this port.
------------------------------------------------------------------------
ID
##
nsxt_ability_to_bind_port
Description
###########
Verifies that the system cannot manipulate the port (plugin limitation).
Complexity
##########
core
Steps
#####
1. Log in to Horizon Dashboard.
2. Navigate to Project -> Compute -> Instances
3. Launch instance VM_1 with image TestVM-VMDK and flavor m1.tiny in vcenter az.
4. Launch instance VM_2 with image TestVM and flavor m1.tiny in nova az.
5. Verify that VMs can communicate with each other. Send icmp ping from VM_1 to VM_2 and vice versa.
6. Disable NSX_port of VM_1.
7. Verify that VMs still communicate with each other. Send icmp ping from VM_2 to VM_1 and vice versa.
8. Enable NSX_port of VM_1.
9. Verify that VMs can communicate with each other. Send icmp ping from VM_1 to VM_2 and vice versa.
Expected result
###############
Pings should get a response.
Check abilities to assign multiple vNIC to a single VM.
-------------------------------------------------------
ID
##
nsxt_multi_vnic
Description
###########
Check abilities to assign multiple vNICs to a single VM.
Complexity
##########
core
Steps
#####
1. Setup for system tests.
2. Log in to Horizon Dashboard.
3. Add two private networks (net01 and net02).
4. Add one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.101.0/24) to each network.
NOTE: There is a constraint on network interfaces: one of the subnets should have a gateway and the other should not, so disable the gateway on that subnet.
5. Launch instance VM_1 with image TestVM-VMDK and flavor m1.tiny in vcenter az.
6. Launch instance VM_2 with image TestVM and flavor m1.tiny in nova az.
7. Check ability to assign multiple vNICs (net01 and net02) to VM_1.
8. Check ability to assign multiple vNICs (net01 and net02) to VM_2.
9. Send icmp ping from VM_1 to VM_2 and vice versa.
Expected result
###############
VM_1 and VM_2 should be attached to vNICs in both net01 and net02. Pings should get a response.
Check connectivity between VMs attached to different networks with a router between them.
-----------------------------------------------------------------------------------------
ID
##
nsxt_connectivity_diff_networks
Description
###########
Test verifies that there is a connection between networks connected through the router.
Complexity
##########
core
Steps
#####
1. Setup for system tests.
2. Log in to Horizon Dashboard.
3. Add two private networks (net01 and net02).
4. Add one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01, 192.168.101.0/24) to each network. Disable gateway for all subnets.
5. Navigate to Project -> Compute -> Instances
6. Launch instances VM_1 and VM_2 in the network 192.168.101.0/24 with image TestVM-VMDK and flavor m1.tiny in vcenter az. Attach default private net as a NIC 1.
7. Launch instances VM_3 and VM_4 in the network 192.168.101.0/24 with image TestVM and flavor m1.tiny in nova az. Attach default private net as a NIC 1.
8. Verify that VMs of the same network can communicate with each other. Send icmp ping from VM_1 to VM_2, VM_3 to VM_4 and vice versa.
9. Verify that VMs of different networks cannot communicate with each other. Send icmp ping from VM_1 to VM_3, VM_4 to VM_2 and vice versa.
10. Create Router_01, set gateway and add interface to external network.
11. Enable gateway on subnets. Attach private networks to router.
12. Verify that VMs of different networks can communicate with each other. Send icmp ping from VM_1 to VM_3, VM_4 to VM_2 and vice versa.
13. Add new Router_02, set gateway and add interface to external network.
14. Detach net_02 from Router_01 and attach to Router_02
15. Assign floating IPs for all created VMs.
16. Verify that VMs of different networks can communicate with each other via FIPs. Send icmp ping from VM_1 to VM_3, VM_4 to VM_2 and vice versa.
Expected result
###############
Pings should get a response.
Check isolation between VMs in different tenants.
-------------------------------------------------
ID
##
nsxt_different_tenants
Description
###########
Verifies isolation in different tenants.
Complexity
##########
core
Steps
#####
1. Setup for system tests.
2. Log in to Horizon Dashboard.
3. Create non-admin tenant test_tenant.
4. Navigate to Identity -> Projects.
5. Click on Create Project.
6. Type name test_tenant.
7. On tab Project Members add admin with admin and member roles.
Activate the test_tenant project by selecting it at the top panel.
8. Navigate to Project -> Network -> Networks
9. Create network with 2 subnets.
Create Router, set gateway and add interface.
10. Navigate to Project -> Compute -> Instances
11. Launch instance VM_1.
12. Activate default tenant.
13. Navigate to Project -> Network -> Networks
14. Create network with subnet.
Create Router, set gateway and add interface.
15. Navigate to Project -> Compute -> Instances
16. Launch instance VM_2.
17. Verify that VMs in different tenants cannot communicate with each other. Send icmp ping from VM_1 of the admin tenant to VM_2 of test_tenant and vice versa.
Expected result
###############
Pings should not get a response.
Check connectivity between VMs with same ip in different tenants.
-----------------------------------------------------------------
ID
##
nsxt_same_ip_different_tenants
Description
###########
Verifies connectivity with same IP in different tenants.
Complexity
##########
advanced
Steps
#####
1. Setup for system tests.
2. Log in to Horizon Dashboard.
3. Create 2 non-admin tenants 'test_1' and 'test_2'.
4. Navigate to Identity -> Projects.
5. Click on Create Project.
6. Type name 'test_1' of tenant.
7. Click on Create Project.
8. Type name 'test_2' of tenant.
9. On tab Project Members add admin with admin and member.
10. In tenant 'test_1' create net1 and subnet1 with CIDR 10.0.0.0/24
11. In tenant 'test_1' create security group 'SG_1' and add rule that allows ingress icmp traffic
12. In tenant 'test_2' create net2 and subnet2 with CIDR 10.0.0.0/24
13. In tenant 'test_2' create security group 'SG_2'
14. In tenant 'test_1' add VM_1 of vcenter in net1 with ip 10.0.0.4 and 'SG_1' as security group.
15. In tenant 'test_1' add VM_2 of nova in net1 with ip 10.0.0.5 and 'SG_1' as security group.
16. In tenant 'test_2' create net1 and subnet1 with CIDR 10.0.0.0/24
17. In tenant 'test_2' create security group 'SG_1' and add rule that allows ingress icmp traffic
18. In tenant 'test_2' add VM_3 of vcenter in net1 with ip 10.0.0.4 and 'SG_1' as security group.
19. In tenant 'test_2' add VM_4 of nova in net1 with ip 10.0.0.5 and 'SG_1' as security group.
20. Assign floating IPs for all created VMs.
21. Verify that VMs with the same IP in different tenants can communicate with each other via FIPs. Send icmp ping from VM_1 to VM_3, VM_2 to VM_4 and vice versa.
Expected result
###############
Pings should get a response.
Check connectivity from VMs to public network
---------------------------------------------
ID
@ -361,12 +78,10 @@ core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Launch two instances in default network. Instances should belong to different az (nova and vcenter).
4. Send ping from each instance to 8.8.8.8.
Expected result
@ -375,20 +90,20 @@ Expected result
Pings should get a response.
Check abilities to create and terminate networks on NSX
-------------------------------------------------------
ID
##
nsxt_manage_networks
Description
###########
Check ability to create/delete networks and attach/detach them to/from a router.
Complexity
@ -400,28 +115,172 @@ core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Create two private networks net_01 and net_02.
4. Launch 1 instance in each network. Instances should belong to different az (nova and vcenter).
5. Check that instances can't communicate with each other.
6. Attach (add interface) both networks to default router.
7. Check that instances can communicate with each other via router.
8. Detach (delete interface) both networks from default router.
9. Check that instances can't communicate with each other.
10. Delete created instances.
11. Delete created networks.
Expected result
###############
No errors.
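The create/attach/detach loop in steps 3-11 maps onto a handful of python-neutronclient calls. A minimal sketch, assuming an authenticated `neutron` client and the id of the default router in `router_id` (both illustrative, not part of the test code)::

    # Step 3: create a private network with a subnet.
    net = neutron.create_network({'network': {'name': 'net_01'}})['network']
    subnet = neutron.create_subnet({'subnet': {
        'network_id': net['id'],
        'ip_version': 4,
        'cidr': '192.168.101.0/24'}})['subnet']

    # Step 6: attach the network to the default router.
    neutron.add_interface_router(router_id, {'subnet_id': subnet['id']})

    # Step 8: detach it again; connectivity via the router must disappear.
    neutron.remove_interface_router(router_id, {'subnet_id': subnet['id']})

    # Step 11: clean up.
    neutron.delete_network(net['id'])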
Check abilities to bind port on NSX to VM, disable and enable this port
-----------------------------------------------------------------------
ID
##
nsxt_manage_ports
Description
###########
Verifies that the NSX-T plugin can manage the admin state of ports: disabling a port cuts the instance off the network, enabling it restores connectivity.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Launch two instances in default network. Instances should belong to different az (nova and vcenter).
4. Check that instances can communicate with each other.
5. Disable port attached to instance in nova az.
6. Check that instances can't communicate with each other.
7. Enable port attached to instance in nova az.
8. Check that instances can communicate with each other.
9. Disable port attached to instance in vcenter az.
10. Check that instances can't communicate with each other.
11. Enable port attached to instance in vcenter az.
12. Check that instances can communicate with each other.
13. Delete created instances.
Expected result
###############
NSX-T plugin should be able to manage admin state of ports.
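Steps 5-11 come down to flipping the port's admin state; the automated `nsxt_manage_ports` test later in this commit does exactly this through the neutron client. A sketch, with `os_conn` an OpenStackActions object and `vm` a nova server::

    # Find the port bound to the instance and toggle its admin state.
    port = os_conn.neutron.list_ports(device_id=vm.id)['ports'][0]['id']
    os_conn.neutron.update_port(port, {'port': {'admin_state_up': False}})
    # ... pings must now fail; re-enable and they must succeed again:
    os_conn.neutron.update_port(port, {'port': {'admin_state_up': True}})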
Check abilities to assign multiple vNICs to a single VM
-------------------------------------------------------
ID
##
nsxt_multiple_vnics
Description
###########
Check abilities to assign multiple vNICs to a single VM.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Add two private networks (net01 and net02).
4. Add one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.101.0/24) to each network.
NOTE: There is a constraint on network interfaces: one of the subnets should have a gateway and the other should not, so disable the gateway on that subnet.
5. Launch instance VM_1 with image TestVM-VMDK and flavor m1.tiny in vcenter az.
6. Launch instance VM_2 with image TestVM and flavor m1.tiny in nova az.
7. Check ability to assign multiple vNICs (net01 and net02) to VM_1.
8. Check ability to assign multiple vNICs (net01 and net02) to VM_2.
9. Send icmp ping from VM_1 to VM_2 and vice versa.
Expected result
###############
VM_1 and VM_2 should be attached to vNICs in both net01 and net02. Pings should get a response.
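One way to run steps 5-8 in a single shot is to request both vNICs at boot time. A hedged novaclient sketch (the `nova` client and the net/image/flavor ids are assumed to be resolved beforehand)::

    # Boot VM_1 with one vNIC in net01 and one in net02.
    server = nova.servers.create(
        name='VM_1',
        image=image_id,    # id of TestVM-VMDK
        flavor=flavor_id,  # id of m1.tiny
        availability_zone='vcenter',
        nics=[{'net-id': net01_id}, {'net-id': net02_id}])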
Check connectivity between VMs attached to different networks with a router between them
----------------------------------------------------------------------------------------
ID
##
nsxt_connectivity_diff_networks
Description
###########
Test verifies that there is a connection between networks connected through the router.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Add two private networks (net01 and net02).
4. Add one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.101.0/24) to each network. Disable gateway for all subnets.
5. Launch 1 instance in each network. Instances should belong to different az (nova and vcenter).
6. Create new router (Router_01), set gateway and add interface to external network.
7. Enable gateway on subnets. Attach private networks to created router.
8. Verify that VMs of different networks can communicate with each other.
9. Add one more router (Router_02), set gateway and add interface to external network.
10. Detach net_02 from Router_01 and attach it to Router_02.
11. Assign floating IPs for all created VMs.
12. Check that the default security group allows ICMP.
13. Verify that VMs of different networks can communicate with each other via FIPs.
14. Delete instances.
15. Detach created networks from routers.
16. Delete created networks.
17. Delete created routers.
Expected result
###############
NSX-T plugin should be able to create/delete routers and assign floating IPs to instances.
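Steps 6, 9 and 10 are plain router operations in neutron; a sketch under the same assumptions as above (authenticated `neutron` client, ids resolved beforehand)::

    # Step 6: create Router_01 with a gateway to the external network.
    router = neutron.create_router({'router': {'name': 'Router_01'}})['router']
    neutron.add_gateway_router(router['id'], {'network_id': external_net_id})
    neutron.add_interface_router(router['id'], {'subnet_id': net02_subnet_id})

    # Step 10: move net_02 from Router_01 to Router_02.
    neutron.remove_interface_router(router['id'],
                                    {'subnet_id': net02_subnet_id})
    neutron.add_interface_router(router02_id, {'subnet_id': net02_subnet_id})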
Check abilities to create and delete security group
---------------------------------------------------
ID
##
nsxt_manage_secgroups
Description
@ -430,6 +289,94 @@ Description
Verifies that creating and removing security groups works fine.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Create new security group with default rules.
4. Add ingress rule for ICMP protocol.
5. Launch two instances in default network. Instances should belong to different az (nova and vcenter).
6. Attach created security group to instances.
7. Check that instances can ping each other.
8. Delete ingress rule for ICMP protocol.
9. Check that instances can't ping each other.
10. Delete instances.
11. Delete security group.
Expected result
###############
NSX-T plugin should be able to create/delete security groups and add/delete rules.
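Steps 3, 4 and 8 map onto three security-group calls; a sketch with python-neutronclient (client and names illustrative)::

    # Steps 3-4: new group with an ingress ICMP rule.
    sg = neutron.create_security_group(
        {'security_group': {'name': 'test_sg'}})['security_group']
    rule = neutron.create_security_group_rule({'security_group_rule': {
        'security_group_id': sg['id'],
        'direction': 'ingress',
        'protocol': 'icmp'}})['security_group_rule']

    # Step 8: drop the ICMP rule; pings between the instances must stop.
    neutron.delete_security_group_rule(rule['id'])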
Check isolation between VMs in different tenants
------------------------------------------------
ID
##
nsxt_different_tenants
Description
###########
Verifies isolation in different tenants.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Create new tenant with new user.
4. Activate new project.
5. Create network with subnet.
6. Create router, set gateway and add interface.
7. Launch instance and associate floating ip with vm.
8. Activate default tenant.
9. Launch instance (use the default network) and associate floating ip with vm.
10. Check that the default security group allows ingress icmp traffic.
11. Send icmp ping between instances in different tenants via floating ip.
Expected result
###############
Instances in different tenants can communicate with each other only via floating IP.
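Steps 3-4 can be scripted with keystoneclient; a sketch assuming a v3 `keystone` client (names and password are placeholders)::

    # Create a new project and a user in it.
    project = keystone.projects.create(name='test_tenant', domain='default')
    user = keystone.users.create(name='test_user', password='secret',
                                 default_project=project)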
Check connectivity between VMs with same ip in different tenants
----------------------------------------------------------------
ID
##
nsxt_same_ip_different_tenants
Description
###########
Verifies connectivity with same IP in different tenants.
Complexity
##########
@ -439,44 +386,38 @@ advanced
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Create 2 non-admin tenants 'test_1' and 'test_2' with common admin user.
4. Activate project 'test_1'.
5. Create network 'net1' and subnet 'subnet1' with CIDR 10.0.0.0/24.
6. Create router 'router1' and attach 'net1' to it.
7. Create security group 'SG_1' and add a rule that allows ingress icmp traffic.
8. Launch two instances (VM_1 and VM_2) in created network with created security group. Instances should belong to different az (nova and vcenter).
9. Assign floating IPs for created VMs.
10. Activate project 'test_2'.
11. Create network 'net2' and subnet 'subnet2' with CIDR 10.0.0.0/24.
12. Create router 'router2' and attach 'net2' to it.
13. Create security group 'SG_2' and add a rule that allows ingress icmp traffic.
14. Launch two instances (VM_3 and VM_4) in created network with created security group. Instances should belong to different az (nova and vcenter).
15. Assign floating IPs for created VMs.
16. Verify that VMs with the same IP in different tenants can communicate with each other via FIPs. Send icmp ping from VM_1 to VM_3, VM_2 to VM_4 and vice versa.
Expected result
###############
Pings should get a response.
Verify that only the associated MAC and IP addresses can communicate on the logical port
----------------------------------------------------------------------------------------
ID
##
nsxt_bind_mac_ip_on_port
Description
@ -494,9 +435,9 @@ core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Launch two instances in default network. Instances should belong to different az (nova and vcenter).
4. Verify that traffic can be successfully sent from and received on the MAC and IP address associated with the logical port.
5. Configure a new IP address, different from the original one, from the subnet on the instance associated with the logical port.
* ifconfig eth0 down
@ -516,14 +457,14 @@ Expected result
The instance should not be able to communicate with the new IP and MAC addresses, but it should communicate with the old IP.
Check creation of instances in one group simultaneously
-------------------------------------------------------
ID
##
nsxt_batch_instance_creation
Description
@ -541,7 +482,7 @@ core
Steps
#####
1. Set up for system tests.
2. Navigate to Project -> Compute -> Instances
3. Launch 5 instances VM_1 simultaneously with image TestVM-VMDK and flavor m1.tiny in vcenter az in default net_04.
4. All instances should be created without any error.
@ -564,7 +505,7 @@ Verify that instances could be launched on enabled compute host
ID
##
nsxt_manage_compute_hosts
Description
@ -582,70 +523,19 @@ core
Steps
#####
1. Set up for system tests.
2. Disable one compute host in each availability zone (vcenter and nova).
3. Create several instances in both az.
4. Check that instances were created on enabled compute hosts.
5. Disable the second compute host and enable the first one in each availability zone (vcenter and nova).
6. Create several instances in both az.
7. Check that instances were created on enabled compute hosts.
Expected result
###############
All instances were created on enabled compute hosts.
Check that settings about new cluster are placed in neutron config
------------------------------------------------------------------
ID
##
nsxt_smoke_add_compute
Description
###########
Adding compute-vmware role and redeploy cluster with NSX-T plugin has effect in neutron configs.
Complexity
##########
core
Steps
#####
1. Upload the NSX-T plugin to master node.
2. Create cluster and configure NSX-T for that cluster.
3. Provision three controller nodes.
4. Deploy cluster.
5. Get configured clusters' moref ids (Managed Object Reference) from neutron config.
6. Add node with compute-vmware role.
7. Redeploy cluster with new node.
8. Get new configured clusters' moref ids from neutron config.
9. Check that the new cluster was added in neutron config.
Expected result
###############
Clusters are reconfigured after compute-vmware has been added.
Fuel create mirror and update core repos on cluster with NSX-T plugin
@ -673,7 +563,7 @@ core
Steps
#####
1. Set up for system tests.
2. Log into controller node via Fuel CLI and get PIDs of services which were launched by plugin and store them:
`ps ax | grep neutron-server`
3. Launch the following command on the Fuel Master node:
@ -722,7 +612,7 @@ Steps
1. Create cluster.
Prepare 2 NSX managers.
2. Configure plugin.
3. Set comma-separated list of NSX managers.
nsx_api_managers = 1.2.3.4,1.2.3.5
4. Deploy cluster.
5. Run OSTF.
@ -771,10 +661,10 @@ Steps
. ./openrc
heat stack-create -f nsxt_stack.yaml teststack
4. Wait for complete creation of the stack.
5. Check that the created instance is operable.
Expected result
###############
All objects related to the stack should be successfully created.

View File

@ -116,7 +116,7 @@ def check_connection_through_host(remote, ip_pair, command='pingv4',
:param ip_pair: type list, ips of instances
:param remote: access point IP
:param command: type string, key 'pingv4', 'pingv6' or 'arping'
:param result_of_command: type integer, exit code of command execution
:param timeout: wait to get expected result
:param interval: interval of executing command
"""
@ -293,7 +293,7 @@ def check_service(ip, commands):
:param ip: ip address of node
:param commands: type list, nova commands to execute on controller,
example of commands:
['nova-manage service list | grep vcenter-vmcluster1']
"""
ssh_manager = SSHManager()
ssh_manager.check_call(ip=ip, command='source openrc')
@ -367,7 +367,7 @@ def verify_instance_state(os_conn, instances=None, expected_state='ACTIVE',
expected_state))
def create_access_point(os_conn, nics, security_groups, host_num=0):
"""Create access point.
Creating instance with floating ip as access point to instances
@ -375,10 +375,11 @@ def create_access_point(os_conn, nics, security_groups):
:param os_conn: type object, openstack
:param nics: type dictionary, neutron networks to assign to instance
:param security_groups: list of security group names
:param host_num: index of the host
"""
# Get the host
host = os_conn.nova.services.list(binary='nova-compute')[host_num]
access_point = create_instances(  # create access point server
os_conn=os_conn, nics=nics,
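# Usage note (illustrative, not from this diff): host_num pins the access
# point to a specific nova-compute service instead of always the first one:
#   create_access_point(os_conn=os_conn, nics=[{'net-id': net_id}],
#                       security_groups=[sg_name], host_num=1)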

View File

@ -43,6 +43,7 @@ class CloseSSHConnectionsPlugin(Plugin):
def import_tests():
from tests import test_plugin_nsxt  # noqa
from tests import test_plugin_system  # noqa
from tests import test_plugin_integration  # noqa
from tests import test_plugin_scale  # noqa
from tests import test_plugin_failover  # noqa

View File

@ -0,0 +1,67 @@
heat_template_version: 2013-05-23

description: >
  HOT template to create a new neutron network plus a router to the public
  network, and for deploying servers into the new network.

parameters:
  admin_floating_net:
    type: string
    label: admin_floating_net
    description: ID or name of public network for which floating IP addresses will be allocated
    default: admin_floating_net
  flavor:
    type: string
    label: flavor
    description: Flavor to use for servers
    default: m1.tiny
  image:
    type: string
    label: image
    description: Image to use for servers
    default: TestVM-VMDK

resources:
  private_net:
    type: OS::Neutron::Net
    properties:
      name: net_1
  private_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: private_net }
      cidr: 10.0.0.0/29
      dns_nameservers: [ 8.8.8.8, 8.8.4.4 ]
  router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: { get_param: admin_floating_net }
  router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: router }
      subnet_id: { get_resource: private_subnet }
  master_image_server_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: private_net }
      fixed_ips:
        - subnet_id: { get_resource: private_subnet }
  master_image_server:
    type: OS::Nova::Server
    properties:
      name: instance_1
      image: { get_param: image }
      flavor: { get_param: flavor }
      availability_zone: "vcenter"
      networks:
        - port: { get_resource: master_image_server_port }

outputs:
  server_info:
    value: { get_attr: [ master_image_server, show ] }

View File

@ -36,6 +36,18 @@ class TestNSXtBase(TestBasic):
self.vcenter_az = 'vcenter'
self.vmware_image = 'TestVM-VMDK'
def get_configured_clusters(self, node_ip):
    """Get configured vcenter clusters moref id on controller.

    :param node_ip: type string, ip of node
    """
    cmd = r"sed -rn 's/^\s*cluster_moid\s*=\s*([^ ]+)\s*$/\1/p' " \
          "/etc/neutron/plugin.ini"
    clusters_id = self.ssh_manager.check_call(ip=node_ip,
                                              cmd=cmd).stdout
    return clusters_id[-1].rstrip().split(',')
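    # For reference: `cluster_moid` in /etc/neutron/plugin.ini holds a
    # comma-separated list of vCenter cluster moref ids, so this helper
    # returns them as a list. Illustrative call (ip/moref values made up):
    #   clusters = self.get_configured_clusters(node_ip='10.109.1.2')
    #   # clusters == ['domain-c7', 'domain-c9']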
def install_nsxt_plugin(self): def install_nsxt_plugin(self):
"""Download and install NSX-T plugin on master node. """Download and install NSX-T plugin on master node.

View File

@ -139,7 +139,6 @@ class TestNSXtSmoke(TestNSXtBase):
self.fuel_web.run_ostf(cluster_id=cluster_id,
                       test_sets=['smoke', 'sanity'])
@test(groups=["nsxt_plugin", "nsxt_bvt_scenarios"])
class TestNSXtBVT(TestNSXtBase):
"""NSX-t BVT scenarios"""

View File

@ -44,8 +44,6 @@ class TestNSXtScale(TestNSXtBase):
* Networking: Neutron with NSX-T plugin
* Storage: default
3. Add nodes with the following roles:
* Controller
* Compute
4. Configure interfaces on nodes.
@ -81,8 +79,6 @@ class TestNSXtScale(TestNSXtBase):
self.show_step(3)  # Add nodes
self.fuel_web.update_nodes(cluster_id,
                           {'slave-01': ['controller'],
                            'slave-04': ['compute']})
self.show_step(4)  # Configure interfaces on nodes
@ -113,8 +109,9 @@ class TestNSXtScale(TestNSXtBase):
os_help.create_instance(os_conn, az='vcenter')
self.show_step(10)  # Add 2 controller nodes
self.fuel_web.update_nodes(cluster_id, {'slave-02': ['controller'],
                                        'slave-03': ['controller']})
self.reconfigure_cluster_interfaces(cluster_id)
self.show_step(11)  # Redeploy cluster
self.fuel_web.deploy_cluster_wait(cluster_id)
@ -216,6 +213,7 @@ class TestNSXtScale(TestNSXtBase):
self.show_step(9)  # Add node with compute role
self.fuel_web.update_nodes(cluster_id, {'slave-05': ['compute']})
self.reconfigure_cluster_interfaces(cluster_id)
self.show_step(10)  # Redeploy cluster
self.fuel_web.deploy_cluster_wait(cluster_id)
@ -326,6 +324,7 @@ class TestNSXtScale(TestNSXtBase):
self.show_step(10)  # Add node with compute-vmware role
self.fuel_web.update_nodes(cluster_id,
                           {'slave-05': ['compute-vmware']})
self.reconfigure_cluster_interfaces(cluster_id)
self.show_step(11)  # Reconfigure vcenter compute clusters
target_node2 = self.fuel_web.get_nailgun_node_by_name('slave-05')

View File

@ -0,0 +1,258 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
from devops.error import TimeoutError
from devops.helpers.helpers import wait
from proboscis import test
from proboscis.asserts import assert_true
from fuelweb_test.helpers import os_actions
from fuelweb_test.helpers.decorators import log_snapshot_after_test
from fuelweb_test.settings import DEPLOYMENT_MODE
from fuelweb_test.settings import SERVTEST_PASSWORD
from fuelweb_test.settings import SERVTEST_TENANT
from fuelweb_test.settings import SERVTEST_USERNAME
from fuelweb_test.tests.base_test_case import SetupEnvironment
from tests.base_plugin_test import TestNSXtBase
from helpers import openstack as os_help
@test(groups=['nsxt_plugin', 'nsxt_system'])
class TestNSXtSystem(TestNSXtBase):
    """Tests from test plan that have been marked as 'Automated'."""

    _tenant = None  # default tenant

    def _create_net(self, os_conn, name):
        """Create network in default tenant."""
        if not self._tenant:
            self._tenant = os_conn.get_tenant(SERVTEST_TENANT)

        return os_conn.create_network(
            network_name=name, tenant_id=self._tenant.id)['network']
    @test(depends_on=[SetupEnvironment.prepare_slaves_5],
          groups=['nsxt_setup_system'])
    @log_snapshot_after_test
    def nsxt_setup_system(self):
        """Set up for system tests.

        Scenario:
            1. Install NSX-T plugin to Fuel Master node with 5 slaves.
            2. Create new environment with the following parameters:
                * Compute: KVM, QEMU with vCenter
                * Networking: Neutron with NSX-T plugin
                * Storage: default
                * Additional services: default
            3. Add nodes with following roles:
                * Controller
                * Compute-vmware
                * Compute
                * Compute
            4. Configure interfaces on nodes.
            5. Enable and configure NSX-T plugin, configure network settings.
            6. Configure VMware vCenter Settings. Add 2 vSphere clusters,
               configure Nova Compute instances on controller and
               compute-vmware.
            7. Verify networks.
            8. Deploy cluster.
            9. Run OSTF.

        Duration: 120 min
        """
        self.show_step(1)  # Install plugin to Fuel Master node with 5 slaves
        self.env.revert_snapshot('ready_with_5_slaves')
        self.install_nsxt_plugin()

        self.show_step(2)  # Create new environment with vCenter
        cluster_id = self.fuel_web.create_cluster(
            name=self.__class__.__name__,
            mode=DEPLOYMENT_MODE,
            settings=self.default.cluster_settings,
            configure_ssl=False)

        self.show_step(3)  # Add nodes
        self.fuel_web.update_nodes(cluster_id,
                                   {'slave-01': ['controller'],
                                    'slave-02': ['compute-vmware'],
                                    'slave-03': ['compute'],
                                    'slave-04': ['compute']})

        self.show_step(4)  # Configure interfaces on nodes
        self.reconfigure_cluster_interfaces(cluster_id)

        self.show_step(5)  # Enable and configure plugin, configure networks
        self.enable_plugin(cluster_id)

        # Configure VMware settings. 2 Cluster, 1 Nova Instance on controllers
        # and 1 Nova Instance on compute-vmware
        self.show_step(6)
        target_node2 = self.fuel_web.get_nailgun_node_by_name('slave-02')
        self.fuel_web.vcenter_configure(cluster_id,
                                        target_node_2=target_node2['hostname'],
                                        multiclusters=True)

        self.show_step(7)  # Verify networks
        self.fuel_web.verify_network(cluster_id)

        self.show_step(8)  # Deploy cluster
        self.fuel_web.deploy_cluster_wait(cluster_id)

        self.show_step(9)  # Run OSTF
        self.fuel_web.run_ostf(cluster_id)

        self.env.make_snapshot("nsxt_setup_system", is_make=True)
    @test(depends_on=[nsxt_setup_system],
          groups=['nsxt_manage_ports'])
    @log_snapshot_after_test
    def nsxt_manage_ports(self):
        """Check ability to bind port on NSX to VM, disable and enable it.

        Scenario:
            1. Set up for system tests.
            2. Get access to OpenStack.
            3. Launch two instances in default network. Instances should
               belong to different az (nova and vcenter).
            4. Check that instances can communicate with each other.
            5. Disable port attached to instance in nova az.
            6. Check that instances can't communicate with each other.
            7. Enable port attached to instance in nova az.
            8. Check that instances can communicate with each other.
            9. Disable port attached to instance in vcenter az.
            10. Check that instances can't communicate with each other.
            11. Enable port attached to instance in vcenter az.
            12. Check that instances can communicate with each other.
            13. Delete created instances.

        Duration: 30 min
        """
        self.show_step(1)  # Set up for system tests
        self.env.revert_snapshot('nsxt_setup_system')

        self.show_step(2)  # Get access to OpenStack
        cluster_id = self.fuel_web.get_last_created_cluster()
        os_conn = os_actions.OpenStackActions(
            self.fuel_web.get_public_vip(cluster_id),
            SERVTEST_USERNAME,
            SERVTEST_PASSWORD,
            SERVTEST_TENANT)

        # Launch two instances in default network. Instances should belong to
        # different az (nova and vcenter)
        self.show_step(3)
        sg = os_conn.create_sec_group_for_ssh().name
        vm1 = os_help.create_instance(os_conn, sg_names=[sg])
        vm2 = os_help.create_instance(os_conn, az='vcenter', sg_names=[sg])

        # Check that instances can communicate with each other
        self.show_step(4)
        default_net = os_conn.nova.networks.find(
            label=self.default.PRIVATE_NET)

        vm1_fip = os_conn.assign_floating_ip(vm1)
        vm2_fip = os_conn.assign_floating_ip(vm2)

        vm1_ip = os_conn.get_nova_instance_ip(vm1, net_name=default_net)
        vm2_ip = os_conn.get_nova_instance_ip(vm2, net_name=default_net)

        os_help.check_connection_vms({vm1_fip: [vm2_ip], vm2_fip: [vm1_ip]})

        self.show_step(5)  # Disable port attached to instance in nova az
        port = os_conn.neutron.list_ports(device_id=vm1.id)['ports'][0]['id']
        os_conn.neutron.update_port(port, {'port': {'admin_state_up': False}})

        # Check that instances can't communicate with each other
        self.show_step(6)
        os_help.check_connection_vms({vm2_fip: [vm1_ip]}, result_of_command=1)

        self.show_step(7)  # Enable port attached to instance in nova az
        os_conn.neutron.update_port(port, {'port': {'admin_state_up': True}})

        # Check that instances can communicate with each other
        self.show_step(8)
        os_help.check_connection_vms({vm1_fip: [vm2_ip], vm2_fip: [vm1_ip]})

        self.show_step(9)  # Disable port attached to instance in vcenter az
        port = os_conn.neutron.list_ports(device_id=vm2.id)['ports'][0]['id']
        os_conn.neutron.update_port(port, {'port': {'admin_state_up': False}})

        # Check that instances can't communicate with each other
        self.show_step(10)
        os_help.check_connection_vms({vm1_fip: [vm2_ip]}, result_of_command=1)

        self.show_step(11)  # Enable port attached to instance in vcenter az
        os_conn.neutron.update_port(port, {'port': {'admin_state_up': True}})

        # Check that instances can communicate with each other
        self.show_step(12)
        os_help.check_connection_vms({vm1_fip: [vm2_ip], vm2_fip: [vm1_ip]})

        self.show_step(13)  # Delete created instances
        vm1.delete()
        vm2.delete()
    @test(depends_on=[nsxt_setup_system],
          groups=['nsxt_hot'])
    @log_snapshot_after_test
    def nsxt_hot(self):
        """Deploy HOT.

        Scenario:
            1. Deploy cluster with NSX-t.
            2. On controller node create teststack with nsxt_stack.yaml.
            3. Wait for status COMPLETE.
            4. Run OSTF.

        Duration: 30 min
        """
        template_path = 'plugin_test/test_templates/nsxt_stack.yaml'

        self.show_step(1)  # Deploy cluster with NSX-t
        self.env.revert_snapshot("nsxt_setup_system")

        # On controller node create teststack with nsxt_stack.yaml
        self.show_step(2)
        cluster_id = self.fuel_web.get_last_created_cluster()
        os_conn = os_actions.OpenStackActions(
            self.fuel_web.get_public_vip(cluster_id),
            SERVTEST_USERNAME,
            SERVTEST_PASSWORD,
            SERVTEST_TENANT)

        with open(template_path) as f:
            template = f.read()

        stack_id = os_conn.heat.stacks.create(
            stack_name='nsxt_stack',
            template=template,
            disable_rollback=True
        )['stack']['id']

        self.show_step(3)  # Wait for status COMPLETE
        expect_state = 'CREATE_COMPLETE'
        try:
            wait(lambda:
                 os_conn.heat.stacks.get(stack_id).stack_status ==
                 expect_state, timeout=60 * 5)
        except TimeoutError:
            current_state = os_conn.heat.stacks.get(stack_id).stack_status
            assert_true(current_state == expect_state,
                        'Timeout is reached. Current state of stack '
                        'is {}'.format(current_state))

        self.show_step(4)  # Run OSTF
        self.fuel_web.run_ostf(cluster_id)