Refactor code of VMware DVS tests

* make the code more readable
* fix most of the misprints in the docs

Change-Id: I56637bd15b9491cd891e1be95f48c4cd61f372ca
ekhomyakova 2016-08-26 18:24:25 +03:00
parent 2eb83d06bd
commit edbe780e60
9 changed files with 1030 additions and 1153 deletions


@@ -64,7 +64,7 @@ Steps
#####
1. Install DVS plugin on master node.
2. Create a new environment with the following parameters:
    * Compute: KVM/QEMU with vCenter
    * Networking: Neutron with VLAN segmentation
    * Storage: default
@@ -81,14 +81,12 @@ Steps
7. Configure VMware vCenter Settings. Add 2 vSphere clusters and configure Nova Compute instances on controllers.
8. Verify networks.
9. Deploy cluster.
10. Run OSTF.
11. Launch instances in nova and vcenter availability zones.
12. Verify connection between instances: check that instances can ping each other.
13. Shutdown controller with vmclusters.
14. Check that vcenter-vmcluster migrates to another controller.
15. Verify connection between instances: check that instances can ping each other (a scripted ping check is sketched after these steps).
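A minimal sketch of the kind of ICMP check meant in steps 12 and 15, assuming it is run from a host that can reach the instances' fixed IPs (for example the access point VM used by the test code in this commit); the addresses in ``instance_ips`` are placeholders::

    import subprocess

    def can_ping(ip, count=3):
        """Return True if `ip` answers ICMP echo requests."""
        return subprocess.call(['ping', '-c', str(count), ip]) == 0

    # Placeholder fixed IPs of the launched instances
    instance_ips = ['192.168.112.5', '192.168.112.6']
    assert all(can_ping(ip) for ip in instance_ips), 'Some instances did not reply to ping'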
Expected result
@@ -123,7 +121,7 @@ Steps
#####
1. Install DVS plugin on master node.
2. Create a new environment with the following parameters:
    * Compute: KVM/QEMU with vCenter
    * Networking: Neutron with VLAN segmentation
    * Storage: default
@@ -141,9 +139,9 @@ Steps
8. Verify networks.
9. Deploy cluster.
10. Run OSTF.
11. Launch instance VM_1 from image TestVM, with availability zone nova and flavor m1.micro.
12. Launch instance VM_2 from image TestVM-VMDK, with availability zone vcenter and flavor m1.micro.
13. Verify connection between instances: check that VM_1 and VM_2 can ping each other.
14. Reboot vCenter.
15. Check that controller lost connection with vCenter.
16. Wait for vCenter.
@@ -202,10 +200,10 @@ Steps
8. Verify networks.
9. Deploy cluster.
10. Run OSTF.
11. Launch instance VM_1 with image TestVM, nova availability zone and flavor m1.micro.
12. Launch instance VM_2 with image TestVM-VMDK, vcenter availability zone and flavor m1.micro.
13. Verify connection between instances: check that VM_1 and VM_2 can ping each other.
14. Reboot vCenter.
15. Check that ComputeVMware lost connection with vCenter.
16. Wait for vCenter.
17. Ensure connectivity between instances.
@@ -261,14 +259,12 @@ Steps
7. Configure VMware vCenter Settings. Add 2 vSphere clusters and configure Nova Compute instances on controllers.
8. Verify networks.
9. Deploy cluster.
10. Run OSTF.
11. Launch instances in nova and vcenter availability zones.
12. Verify connection between instances: check that instances can ping each other.
13. Reset controller with vmclusters services.
14. Check that vmclusters services migrate to another controller.
15. Verify connection between instances: check that instances can ping each other.
Expected result


@@ -105,15 +105,15 @@ Steps
    * Storage: default
    * Additional services: default
3. Go to Network tab -> Other subtab and check DVS plugin section is displayed with all required GUI elements:
    'Neutron VMware DVS ML2 plugin' checkbox
    'Use the VMware DVS firewall driver' checkbox
    'Enter the cluster to dvSwitch mapping.' text field with description 'List of ClusterName:SwitchName pairs, separated by semicolon.'
    'Versions' radio button with <plugin version>
4. Verify that checkbox 'Neutron VMware DVS ML2 plugin' is enabled by default.
5. Verify that user can disable/enable the DVS plugin by clicking on the checkbox 'Neutron VMware DVS ML2 plugin'.
6. Verify that checkbox 'Use the VMware DVS firewall driver' is enabled by default.
7. Verify that all labels of the DVS plugin section have the same font style and color.
8. Verify that all elements of the DVS plugin section are vertically aligned.
Expected result
@@ -184,7 +184,7 @@ dvs_vcenter_bvt
Description
###########
Check deployment with VMware DVS plugin, 3 Controllers, 3 Compute + CephOSD and CinderVMware + computeVMware roles.
Complexity


@@ -15,7 +15,7 @@ dvs_vcenter_systest_setup
Description
###########
Deploy environment in DualHypervisors mode with 1 controller, 1 compute-vmware and 2 compute nodes. Nova Compute instances are running on controller nodes.
Complexity
@@ -86,7 +86,7 @@ Steps
5. Remove private network net_01.
6. Check that network net_01 is not present in the vSphere.
7. Add private network net_01.
8. Check that network net_01 is present in the vSphere.
Expected result
@@ -160,14 +160,14 @@ Steps
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Navigate to Project -> Compute -> Instances.
4. Launch instance VM_1 with image TestVM, availability zone nova and flavor m1.micro.
5. Launch instance VM_2 with image TestVM-VMDK, availability zone vcenter and flavor m1.micro.
6. Verify that instances communicate between each other: check that VM_1 and VM_2 can ping each other.
7. Disable interface of VM_1.
8. Verify that instances don't communicate between each other: check that VM_1 and VM_2 can not ping each other.
9. Enable interface of VM_1.
10. Verify that instances communicate between each other: check that VM_1 and VM_2 can ping each other.
Expected result
@@ -204,13 +204,13 @@ Steps
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Add two private networks (net01 and net02).
4. Add one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.102.0/24) to each network.
5. Launch instance VM_1 with image TestVM and flavor m1.micro in nova availability zone.
6. Launch instance VM_2 with image TestVM-VMDK and flavor m1.micro in vcenter availability zone.
7. Check abilities to assign multiple vNIC net01 and net02 to VM_1.
8. Check abilities to assign multiple vNIC net01 and net02 to VM_2.
9. Check that both interfaces on each instance have an IP address. To activate the second interface on cirros, edit /etc/network/interfaces and restart the network: "sudo /etc/init.d/S40network restart"
10. Check that VM_1 and VM_2 can ping each other.
Expected result
@@ -219,8 +219,8 @@ Expected result
VM_1 and VM_2 should be attached to multiple vNIC net01 and net02. Pings should get a response.
Check connection between instances in one default tenant.
---------------------------------------------------------
ID
@@ -245,10 +245,10 @@ Steps
#####
1. Set up for system tests.
2. Navigate to Project -> Compute -> Instances.
3. Launch instance VM_1 with image TestVM and flavor m1.micro in nova availability zone.
4. Launch instance VM_2 with image TestVM-VMDK and flavor m1.micro in vcenter availability zone.
5. Verify that VM_1 and VM_2 on different hypervisors communicate between each other: check that instances can ping each other.
Expected result
@@ -285,10 +285,10 @@ Steps
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Create tenant net_01 with subnet.
4. Navigate to Project -> Compute -> Instances.
5. Launch instance VM_1 with image TestVM and flavor m1.micro in nova availability zone in net_01.
6. Launch instance VM_2 with image TestVM-VMDK and flavor m1.micro in vcenter availability zone in net_01.
7. Verify that instances in the same tenant communicate between each other: check that VM_1 and VM_2 can ping each other.
Expected result
@@ -297,7 +297,7 @@ Expected result
Pings should get a response.
Check connectivity between instances attached to different networks with and without a router between them.
-----------------------------------------------------------------------------------------------------------
@@ -310,7 +310,7 @@ dvs_different_networks
Description
###########
Check connectivity between instances attached to different networks with and without a router between them.
Complexity
@@ -333,11 +333,11 @@ Steps
9. Launch instances in the net02 with image TestVM and flavor m1.micro in nova az.
10. Launch instances in the net02 with image TestVM-VMDK and flavor m1.micro in vcenter az.
11. Verify that instances of same networks communicate between each other via private ip.
    Check that instances can ping each other.
12. Verify that instances of different networks don't communicate between each other via private ip.
13. Delete net_02 from Router_02 and add it to the Router_01.
14. Verify that instances of different networks communicate between each other via private ip.
    Check that instances can ping each other.
Expected result
@@ -375,15 +375,15 @@ Steps
2. Log in to Horizon Dashboard.
3. Create non-admin tenant with name 'test_tenant': Identity -> Projects -> Create Project. On tab Project Members add admin with admin and member.
4. Navigate to Project -> Network -> Networks.
5. Create network with subnet.
6. Navigate to Project -> Compute -> Instances.
7. Launch instance VM_1 with image TestVM-VMDK in the vcenter availability zone.
8. Navigate to test_tenant.
9. Navigate to Project -> Network -> Networks.
10. Create Router, set gateway and add interface.
11. Navigate to Project -> Compute -> Instances.
12. Launch instance VM_2 with image TestVM-VMDK in the vcenter availability zone.
13. Verify that instances on different tenants don't communicate between each other: check that instances can not ping each other.
Expected result
@@ -421,14 +421,14 @@ Steps
2. Log in to Horizon Dashboard.
3. Create net_01: net01_subnet, 192.168.112.0/24 and attach it to default router.
4. Launch instance VM_1 of nova availability zone with image TestVM and flavor m1.micro in the default internal network.
5. Launch instance VM_2 of vcenter availability zone with image TestVM-VMDK and flavor m1.micro in the net_01.
6. Send icmp request from instances VM_1 and VM_2 to 8.8.8.8 or other outside ip and check that an icmp reply is received.
Expected result
###############
Pings should get a response.
Check connectivity instances to public network with floating ip.
@@ -460,8 +460,8 @@ Steps
2. Log in to Horizon Dashboard.
3. Create net01: net01__subnet, 192.168.112.0/24 and attach it to the default router.
4. Launch instance VM_1 of nova availability zone with image TestVM and flavor m1.micro in the default internal network. Associate floating ip.
5. Launch instance VM_2 of vcenter availability zone with image TestVM-VMDK and flavor m1.micro in the net_01. Associate floating ip.
6. Send icmp request from instances VM_1 and VM_2 to 8.8.8.8 or other outside ip and check that an icmp reply is received.
Expected result
@@ -497,29 +497,29 @@ Steps
1. Set up for system tests.
2. Create non default network with subnet net_01.
3. Launch 2 instances of vcenter availability zone and 2 instances of nova availability zone in the tenant network net_01.
4. Launch 2 instances of vcenter availability zone and 2 instances of nova availability zone in the internal tenant network.
5. Attach net_01 to default router.
6. Create security group SG_1 to allow ICMP traffic (steps 6-9 are sketched in code after this list).
7. Add Ingress rule for ICMP protocol to SG_1.
8. Create security group SG_2 to allow TCP traffic on port 22.
9. Add Ingress rule for TCP protocol to SG_2.
10. Remove default security group and attach SG_1 and SG_2 to VMs.
11. Check that instances can ping each other.
12. Check ssh connection is available between instances.
13. Delete all rules from SG_1 and SG_2.
14. Check that instances are not available via ssh.
15. Add Ingress and egress rules for TCP protocol to SG_2.
16. Check ssh connection is available between instances.
17. Check that instances can not ping each other.
18. Add Ingress and egress rules for ICMP protocol to SG_1.
19. Check that instances can ping each other.
20. Delete Ingress rule for ICMP protocol from SG_1 (for OS cirros skip this step).
21. Add Ingress rule for ICMP ipv6 to SG_1 (for OS cirros skip this step).
22. Check that ping6 is available between instances (for OS cirros skip this step).
23. Delete SG1 and SG2 security groups.
24. Attach instances to default security group.
25. Check that instances can ping each other.
26. Check ssh is available between instances.
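A minimal sketch of how steps 6-9 could be scripted with python-neutronclient; the credentials and the keystone endpoint below are placeholders, not values from this test plan::

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(
        username='admin', password='admin', tenant_name='admin',
        auth_url='http://<keystone-ip>:5000/v2.0')

    # SG_1: allow ICMP traffic (steps 6-7)
    sg1 = neutron.create_security_group(
        {'security_group': {'name': 'SG_1', 'description': 'allow ICMP'}})
    neutron.create_security_group_rule(
        {'security_group_rule': {
            'security_group_id': sg1['security_group']['id'],
            'direction': 'ingress', 'protocol': 'icmp'}})

    # SG_2: allow TCP traffic on port 22 (steps 8-9)
    sg2 = neutron.create_security_group(
        {'security_group': {'name': 'SG_2', 'description': 'allow SSH'}})
    neutron.create_security_group_rule(
        {'security_group_rule': {
            'security_group_id': sg2['security_group']['id'],
            'direction': 'ingress', 'protocol': 'tcp',
            'port_range_min': 22, 'port_range_max': 22}})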
@@ -556,7 +556,7 @@ Steps
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Launch 2 instances on each hypervisor (one in vcenter az and another one in nova az).
4. Verify that traffic can be successfully sent from and received on the MAC and IP address associated with the logical port.
5. Configure a new IP address on the instance associated with the logical port.
6. Confirm that the instance cannot communicate with that IP address.
@@ -616,7 +616,7 @@ Steps
    * network: net1 with ip 10.0.0.5
    * SG: SG_1
10. In tenant 'test_2' create net2 and subnet2 with CIDR 10.0.0.0/24.
11. In tenant 'test_2' create Router 'router_2' with external floating network.
12. In tenant 'test_2' attach interface of net2, subnet2 to router_2.
13. In tenant 'test_2' create security group 'SG_2' and add rule that allows ingress icmp traffic.
14. In tenant 'test_2' launch instance:
@@ -633,16 +633,16 @@ Steps
    * flavor: m1.micro
    * network: net2 with ip 10.0.0.5
    * SG: SG_2
16. Assign floating ips for each instance (a sketch of this follows these steps).
17. Check instances in tenant_1 communicate between each other by internal ip.
18. Check instances in tenant_2 communicate between each other by internal ip.
19. Check instances in different tenants communicate between each other by floating ip.
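The test code in this commit assigns floating ips through ``openstack.create_and_assign_floating_ips``; an equivalent sketch of step 16 with raw python-neutronclient calls, where the network and port ids are placeholders::

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(
        username='admin', password='admin', tenant_name='admin',
        auth_url='http://<keystone-ip>:5000/v2.0')

    ext_net_id = '<external-network-id>'
    instance_port_id = '<port-id-of-the-instance>'

    # Allocate a floating ip in the external network ...
    fip = neutron.create_floatingip(
        {'floatingip': {'floating_network_id': ext_net_id}})
    # ... and associate it with the instance port
    neutron.update_floatingip(
        fip['floatingip']['id'],
        {'floatingip': {'port_id': instance_port_id}})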
Expected result
###############
Pings should get a response.
Check creation instance in the one group simultaneously.
@@ -671,11 +671,11 @@ Steps
#####
1. Set up for system tests.
2. Navigate to Project -> Compute -> Instances.
3. Launch a few instances simultaneously with image TestVM and flavor m1.micro in nova availability zone in default internal network.
4. Launch a few instances simultaneously with image TestVM-VMDK and flavor m1.micro in vcenter availability zone in default internal network.
5. Check connection between instances (ping, ssh).
6. Delete all instances from Horizon simultaneously.
Expected result
@@ -726,7 +726,7 @@ Steps
7. Configure VMware vCenter Settings. Add 1 vSphere cluster and configure Nova Compute instances on controllers.
8. Verify networks.
9. Deploy cluster.
10. Create instances for each hypervisor type.
11. Create 2 volumes, each in its own availability zone.
12. Attach each volume to its instance.
@@ -800,17 +800,19 @@ Steps
1. Upload plugins to the master node.
2. Install plugin.
3. Create cluster with vcenter.
4. Set CephOSD as backend for Glance and Cinder.
5. Add nodes with the following roles:
    * Controller
    * Compute-VMware
    * Compute-VMware
    * Compute
    * Ceph-OSD
    * Ceph-OSD
    * Ceph-OSD
6. Upload network template.
7. Check network configuration.
8. Deploy the cluster.
9. Run OSTF.
Expected result
@@ -832,8 +834,7 @@ dvs_vcenter_remote_sg
Description
###########
Verify that network traffic is allowed/prohibited to instances according to security group rules.
Complexity
@@ -859,41 +860,41 @@ Steps
    SG_man
    SG_DNS
6. Add rules to SG_web:
    Ingress rule with ip protocol 'http', port range 80-80, ip range 0.0.0.0/0
    Ingress rule with ip protocol 'tcp', port range 3306-3306, SG group 'SG_db'
    Ingress rule with ip protocol 'tcp', port range 22-22, SG group 'SG_man'
    Egress rule with ip protocol 'http', port range 80-80, ip range 0.0.0.0/0
    Egress rule with ip protocol 'tcp', port range 3306-3306, SG group 'SG_db'
    Egress rule with ip protocol 'tcp', port range 22-22, SG group 'SG_man'
7. Add rules to SG_db:
    Egress rule with ip protocol 'http', port range 80-80, ip range 0.0.0.0/0
    Egress rule with ip protocol 'https', port range 443-443, ip range 0.0.0.0/0
    Ingress rule with ip protocol 'http', port range 80-80, ip range 0.0.0.0/0
    Ingress rule with ip protocol 'https', port range 443-443, ip range 0.0.0.0/0
    Ingress rule with ip protocol 'tcp', port range 3306-3306, SG group 'SG_web'
    Ingress rule with ip protocol 'tcp', port range 22-22, SG group 'SG_man'
    Egress rule with ip protocol 'tcp', port range 3306-3306, SG group 'SG_web'
    Egress rule with ip protocol 'tcp', port range 22-22, SG group 'SG_man'
8. Add rules to SG_DNS:
    Ingress rule with ip protocol 'udp', port range 53-53, ip-prefix 'ip DNS server'
    Egress rule with ip protocol 'udp', port range 53-53, ip-prefix 'ip DNS server'
    Ingress rule with ip protocol 'tcp', port range 53-53, ip-prefix 'ip DNS server'
    Egress rule with ip protocol 'tcp', port range 53-53, ip-prefix 'ip DNS server'
9. Add rules to SG_man:
    Ingress rule with ip protocol 'tcp', port range 22-22, ip range 0.0.0.0/0
    Egress rule with ip protocol 'tcp', port range 22-22, ip range 0.0.0.0/0
10. Launch following instances in net_1 from image 'ubuntu':
    instance 'webserver' of vcenter az with SG_web, SG_DNS
    instance 'mysqldb' of vcenter az with SG_db, SG_DNS
    instance 'manage' of nova az with SG_man, SG_DNS
11. Verify that traffic is enabled to instance 'webserver' from external network by http port 80.
12. Verify that traffic is enabled to instance 'webserver' from VM 'manage' by tcp port 22.
13. Verify that traffic is enabled to instance 'webserver' from VM 'mysqldb' by tcp port 3306.
14. Verify that traffic is enabled to internet from instance 'mysqldb' by https port 443.
15. Verify that traffic is enabled to instance 'mysqldb' from VM 'manage' by tcp port 22.
16. Verify that traffic is enabled to instance 'manage' from internet by tcp port 22.
17. Verify that traffic is not enabled to instance 'webserver' from internet by tcp port 22.
18. Verify that traffic is not enabled to instance 'mysqldb' from internet by tcp port 3306.
19. Verify that traffic is not enabled to instance 'manage' from internet by http port 80.
20. Verify that traffic is enabled to all instances from DNS server by udp/tcp port 53 and vice versa.
@@ -901,8 +902,7 @@ Steps
Expected result
###############
Network traffic is allowed/prohibited to instances according to security group rules.
Security group rules with remote group id simple.
@@ -918,8 +918,7 @@ dvs_remote_sg_simple
Description
###########
Verify that network traffic is allowed/prohibited to instances according to security group rules.
Complexity
@@ -947,16 +946,15 @@ Steps
    Launch 2 instances of nova az with SG1 in net1.
8. Launch 2 instances of vcenter az with SG2 in net1.
    Launch 2 instances of nova az with SG2 in net1.
9. Check that instances from SG1 can ping each other.
10. Check that instances from SG2 can ping each other.
11. Check that instances from SG1 can not ping instances from SG2 and vice versa.
Expected result
###############
Network traffic is allowed/prohibited to instances according to security group rules.
Check attached/detached ports with security groups.
@@ -1007,8 +1005,7 @@ Steps
Expected result
###############
Verify that network traffic is allowed/prohibited to instances according to security group rules.
Check launch and remove instances in the one group simultaneously with few security groups.
@@ -1043,23 +1040,23 @@ Steps
    Egress rule with ip protocol 'icmp', port range any, SG group 'SG1'
    Ingress rule with ssh protocol 'tcp', port range 22, SG group 'SG1'
    Egress rule with ssh protocol 'tcp', port range 22, SG group 'SG1'
4. Create security group SG2 with rules:
    Ingress rule with ssh protocol 'tcp', port range 22, SG group 'SG2'
    Egress rule with ssh protocol 'tcp', port range 22, SG group 'SG2'
5. Launch a few instances of vcenter availability zone with Default SG + SG1 + SG2 in net_1 in one batch.
6. Launch a few instances of nova availability zone with Default SG + SG1 + SG2 in net_1 in one batch.
7. Verify that icmp/ssh is enabled between instances.
8. Remove all instances.
9. Launch a few instances of nova availability zone with Default SG + SG1 + SG2 in net_1 in one batch.
10. Launch a few instances of vcenter availability zone with Default SG + SG1 + SG2 in net_1 in one batch.
11. Verify that icmp/ssh is enabled between instances.
12. Remove all instances.
Expected result
###############
Verify that network traffic is allowed/prohibited to instances according to security group rules.
Security group rules with remote ip prefix.
@@ -1102,18 +1099,17 @@ Steps
    Ingress rule with ip protocol 'tcp', port range any, <internal ip of VM2>
    Egress rule with ip protocol 'tcp', port range any, <internal ip of VM2>
9. Launch 2 instances 'VM3' and 'VM4' of vcenter az with SG1 and SG2 in net1.
    Launch 2 instances 'VM5' and 'VM6' of nova az with SG1 and SG2 in net1.
10. Check that instances 'VM3', 'VM4', 'VM5' and 'VM6' can ping VM1 and vice versa.
11. Check that instances 'VM3', 'VM4', 'VM5' and 'VM6' can not ping each other.
12. Verify that ssh is enabled from 'VM3', 'VM4', 'VM5' and 'VM6' to VM2 and vice versa.
13. Verify that ssh is blocked between 'VM3', 'VM4', 'VM5' and 'VM6' and vice versa.
Expected result
###############
Verify that network traffic is allowed/prohibited to instances according to security group rules.
Fuel create mirror and update core repos on cluster with DVS
@@ -1143,14 +1139,14 @@ Steps
1. Set up for system tests.
2. Log into controller node via Fuel CLI and get PIDs of services which were
   launched by plugin and store them (a sketch of this check follows these steps).
3. Launch the following command on the Fuel Master node:
   `fuel-mirror create -P ubuntu -G mos ubuntu`
4. Run the command below on the Fuel Master node:
   `fuel-mirror apply -P ubuntu -G mos ubuntu --env <env_id> --replace`
5. Run the command below on the Fuel Master node:
   `fuel --env <env_id> node --node-id <node_ids_separated_by_comma> --tasks setup_repositories`
   And wait until task is done.
6. Log into controller node and check plugins services are alive and their PIDs are not changed.
7. Check all nodes remain in ready status.
8. Rerun OSTF.
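Steps 2 and 6 could look roughly like the sketch below inside a test of this suite (where ``self.ssh_manager`` and ``assert_true`` are available, as in the code later in this commit); ``controller_ip`` and the ``pgrep`` pattern for the plugin's services are assumptions, not values from this plan::

    cmd = 'pgrep -f neutron-dvs-agent'  # service name pattern is an assumption
    pids_before = self.ssh_manager.execute_on_remote(
        ip=controller_ip, cmd=cmd)['stdout']
    # ... steps 3-5: fuel-mirror create/apply and setup_repositories ...
    pids_after = self.ssh_manager.execute_on_remote(
        ip=controller_ip, cmd=cmd)['stdout']
    assert_true(pids_before == pids_after,
                'Plugin service PIDs changed after the repository update')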
@@ -1163,8 +1159,8 @@ Cluster (nodes) should remain in ready state.
OSTF test should be passed on rerun
Modifying env with DVS plugin (removing/adding controller)
----------------------------------------------------------
ID
##
@@ -1186,7 +1182,7 @@ core
Steps
#####
1. Install DVS plugin.
2. Create a new environment with the following parameters:
    * Compute: KVM/QEMU with vCenter
    * Networking: Neutron with VLAN segmentation + Neutron with DVS
@@ -1202,15 +1198,15 @@ Steps
5. Configure DVS plugin.
6. Configure VMware vCenter Settings.
7. Verify networks.
8. Deploy changes.
9. Run OSTF.
10. Remove controller on which DVS agent is run.
11. Deploy changes.
12. Rerun OSTF.
13. Add 1 node with controller role to the cluster.
14. Verify networks.
15. Redeploy changes.
16. Rerun OSTF.
Expected result
###############
@@ -1242,14 +1238,14 @@ Steps
#####
1. Set up for system tests.
2. Remove compute from the cluster.
3. Verify networks.
4. Deploy changes.
5. Rerun OSTF.
6. Add 1 node with compute role to the cluster.
7. Verify networks.
8. Redeploy changes.
9. Rerun OSTF.
Expected result
###############
@@ -1280,7 +1276,7 @@ core
Steps
#####
1. Install DVS plugin.
2. Create a new environment with the following parameters:
    * Compute: KVM/QEMU with vCenter
    * Networking: Neutron with VLAN segmentation
@@ -1297,10 +1293,10 @@ Steps
8. Add 1 node with compute-vmware role, configure Nova Compute instance on compute-vmware and redeploy cluster.
9. Verify that previously created instance is working.
10. Run OSTF tests.
11. Delete compute-vmware.
12. Redeploy changes.
13. Verify that previously created instance is working.
14. Run OSTF.
Expected result
###############


@@ -117,33 +117,33 @@ Target Test Items
* Install/uninstall Fuel Vmware-DVS plugin
* Deploy Cluster with Fuel Vmware-DVS plugin by Fuel
* Roles of nodes
    * Controller
    * Compute
    * Cinder
    * Mongo
    * Compute-VMware
    * Cinder-VMware
* Hypervisors:
    * KVM + vCenter
    * Qemu + vCenter
* Storage:
    * Ceph
    * Cinder
    * VMWare vCenter/ESXi datastore for images
* Network
    * Neutron with VLAN segmentation
    * HA + Neutron with VLAN
* Additional components
    * Ceilometer
    * Health Check
* Upgrade master node
    * MOS and VMware-DVS plugin
* Computes (Nova)
    * Launch and manage instances
    * Launch instances in batch
* Networks (Neutron)
    * Create and manage public and private networks
    * Create and manage routers
    * Port binding / disabling
    * Port security
    * Security groups
@@ -158,7 +158,7 @@ Target Test Items
    * Create and manage projects
    * Create and manage users
* Glance
    * Create and manage images
* GUI
    * Fuel UI
* CLI
@@ -168,13 +168,13 @@ Target Test Items
Test approach
*************
The project test approach consists of Smoke, Integration, System, Regression,
Failover and Acceptance test levels.
**Smoke testing**
The goal of smoke testing is to ensure that the most critical features of Fuel
VMware DVS plugin work after new build delivery. Smoke tests will be used by
QA to accept software builds from Development team.
**Integration and System testing**
@@ -185,8 +185,8 @@ without gaps in dataflow.
**Regression testing**
The goal of regression testing is to verify that key features of Fuel VMware
DVS plugin are not affected by any changes performed during preparation to
release (includes defects fixing, new features introduction and possible
updates).
@@ -199,7 +199,7 @@ malfunctions with undue loss of data or data integrity.
**Acceptance testing**
The goal of acceptance testing is to ensure that Fuel VMware DVS plugin has
reached a level of stability that meets requirements and acceptance criteria.
***********************
@@ -256,7 +256,7 @@ Project testing activities are to be resulted in the following reporting documen
Acceptance criteria
===================
* All acceptance criteria for user stories are met
* All test cases are executed. BVT tests are passed
* Critical and high issues are fixed
* All required documents are delivered
@@ -268,4 +268,4 @@ Test cases
.. include:: test_suite_smoke.rst
.. include:: test_suite_system.rst
.. include:: test_suite_failover.rst


@@ -40,7 +40,7 @@ TestBasic = fuelweb_test.tests.base_test_case.TestBasic
SetupEnvironment = fuelweb_test.tests.base_test_case.SetupEnvironment
@test(groups=['plugins', 'dvs_vcenter_plugin', 'dvs_vcenter_system',
              'dvs_vcenter_destructive'])
class TestDVSDestructive(TestBasic):
    """Failover test suite.
@@ -59,19 +59,17 @@ class TestDVSDestructive(TestBasic):
    cmds = ['nova-manage service list | grep vcenter-vmcluster1',
            'nova-manage service list | grep vcenter-vmcluster2']
    networks = [{
        'name': 'net_1',
        'subnets': [
            {'name': 'subnet_1',
             'cidr': '192.168.112.0/24'}
        ]}, {
        'name': 'net_2',
        'subnets': [
            {'name': 'subnet_1',
             'cidr': '192.168.113.0/24'}
        ]}
    ]
    # defaults
@@ -90,47 +88,47 @@ class TestDVSDestructive(TestBasic):
        :param openstack_ip: type string, openstack ip
        """
        os_conn = os_actions.OpenStackActions(
            openstack_ip, SERVTEST_USERNAME,
            SERVTEST_PASSWORD,
            SERVTEST_TENANT)
        # Create security group with rules for ssh and ping
        security_group = os_conn.create_sec_group_for_ssh()
        _sec_groups = os_conn.neutron.list_security_groups()['security_groups']
        _serv_tenant_id = os_conn.get_tenant(SERVTEST_TENANT).id
        default_sg = [sg for sg in _sec_groups
                      if sg['tenant_id'] == _serv_tenant_id and
                      sg['name'] == 'default'][0]
        network = os_conn.nova.networks.find(label=self.inter_net_name)
        # Create access point server
        _, access_point_ip = openstack.create_access_point(
            os_conn=os_conn,
            nics=[{'net-id': network.id}],
            security_groups=[security_group.name, default_sg['name']])
        self.show_step(11)
        self.show_step(12)
        instances = openstack.create_instances(
            os_conn=os_conn,
            nics=[{'net-id': network.id}],
            vm_count=1,
            security_groups=[default_sg['name']])
        openstack.verify_instance_state(os_conn)
        # Get private ips of instances
        ips = [os_conn.get_nova_instance_ip(i, net_name=self.inter_net_name)
               for i in instances]
        time.sleep(30)
        self.show_step(13)
        openstack.ping_each_other(ips=ips, access_point_ip=access_point_ip)
        self.show_step(14)
        vcenter_name = [name for name in self.WORKSTATION_NODES
                        if 'vcenter' in name].pop()
        node = vmrun.Vmrun(
            self.host_type,
            self.path_to_vmx_file.format(vcenter_name),
@@ -143,13 +141,13 @@ class TestDVSDestructive(TestBasic):
        wait(lambda: not icmp_ping(self.VCENTER_IP),
             interval=1,
             timeout=10,
             timeout_msg='vCenter is still available.')
        self.show_step(16)
        wait(lambda: icmp_ping(self.VCENTER_IP),
             interval=5,
             timeout=120,
             timeout_msg='vCenter is not available.')
        self.show_step(17)
        openstack.ping_each_other(ips=ips, access_point_ip=access_point_ip)
@@ -163,7 +161,7 @@ class TestDVSDestructive(TestBasic):
        Scenario:
            1. Revert snapshot to dvs_vcenter_systest_setup.
            2. Try to uninstall dvs plugin.
            3. Check that plugin is not removed.
        Duration: 1.8 hours
@@ -178,12 +176,13 @@ class TestDVSDestructive(TestBasic):
        self.ssh_manager.execute_on_remote(
            ip=self.ssh_manager.admin_ip,
            cmd=cmd,
            assert_ec_equal=[1])
        self.show_step(3)
        output = self.ssh_manager.execute_on_remote(
            ip=self.ssh_manager.admin_ip,
            cmd='fuel plugins list'
        )['stdout']
        assert_true(plugin.plugin_name in output[-1].split(' '),
                    "Plugin '{0}' was removed".format(plugin.plugin_name))
@@ -194,19 +193,18 @@ class TestDVSDestructive(TestBasic):
        """Check abilities to bind port on DVS to VM, disable/enable this port.
        Scenario:
            1. Revert snapshot to dvs_vcenter_systest_setup.
            2. Create private networks net01 with subnet.
            3. Launch instance VM_1 in the net01 with
               image TestVM and flavor m1.micro in nova az.
            4. Launch instance VM_2 in the net01 with
               image TestVM-VMDK and flavor m1.micro in vcenter az.
            5. Disable sub_net port of instances.
            6. Check instances are not available.
            7. Enable sub_net port of all instances.
            8. Verify that instances communicate between each other.
               Send icmp ping between instances.
        Duration: 1.5 hours
        """
@@ -221,22 +219,20 @@ class TestDVSDestructive(TestBasic):
            SERVTEST_PASSWORD,
            SERVTEST_TENANT)
        # Create security group with rules for ssh and ping
        security_group = os_conn.create_sec_group_for_ssh()
        self.show_step(2)
        net = self.networks[0]
        net_1 = os_conn.create_network(network_name=net['name'])['network']
        subnet = os_conn.create_subnet(
            subnet_name=net['subnets'][0]['name'],
            network_id=net_1['id'],
            cidr=net['subnets'][0]['cidr'])
        logger.info("Check network was created.")
        assert_true(os_conn.get_network(net_1['name'])['id'] == net_1['id'])
        logger.info("Add net_1 to default router")
        router = os_conn.get_router(os_conn.get_network(self.ext_net_name))
@ -246,42 +242,37 @@ class TestDVSDestructive(TestBasic):
self.show_step(3) self.show_step(3)
self.show_step(4) self.show_step(4)
instances = openstack.create_instances( instances = openstack.create_instances(
os_conn=os_conn, nics=[{'net-id': network['id']}], vm_count=1, os_conn=os_conn,
security_groups=[security_group.name] nics=[{'net-id': net_1['id']}],
) vm_count=1,
security_groups=[security_group.name])
openstack.verify_instance_state(os_conn) openstack.verify_instance_state(os_conn)
ports = os_conn.neutron.list_ports()['ports'] ports = os_conn.neutron.list_ports()['ports']
fips = openstack.create_and_assign_floating_ips(os_conn, instances) fips = openstack.create_and_assign_floating_ips(os_conn, instances)
inst_ips = [os_conn.get_nova_instance_ip( inst_ips = [os_conn.get_nova_instance_ip(i, net_name=net_1['name'])
instance, net_name=network['name']) for instance in instances] for i in instances]
inst_ports = [p for p in ports inst_ports = [p for p in ports
if p['fixed_ips'][0]['ip_address'] in inst_ips] if p['fixed_ips'][0]['ip_address'] in inst_ips]
self.show_step(5) self.show_step(5)
_body = {'port': {'admin_state_up': False}}
for port in inst_ports: for port in inst_ports:
os_conn.neutron.update_port( os_conn.neutron.update_port(port=port['id'], body=_body)
port['id'], {'port': {'admin_state_up': False}}
)
self.show_step(6) self.show_step(6)
# TODO(vgorin) create better solution for this step
try: try:
openstack.ping_each_other(fips) openstack.ping_each_other(fips)
checker = 1
except Exception as e: except Exception as e:
logger.info(e) logger.info(e)
checker = 0 else:
if checker:
fail('Ping is available between instances') fail('Ping is available between instances')
self.show_step(7) self.show_step(7)
_body = {'port': {'admin_state_up': True}}
for port in inst_ports: for port in inst_ports:
os_conn.neutron.update_port( os_conn.neutron.update_port(port=port['id'], body=_body)
port['id'], {'port': {'admin_state_up': True}}
)
self.show_step(8) self.show_step(8)
openstack.ping_each_other(fips, timeout=90) openstack.ping_each_other(fips, timeout=90)
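The refactor above replaces the old `checker` flag with a try/except/else block around the ping check in step 6. A minimal, self-contained sketch of that pattern follows; `ping_each_other` here is a stand-in for the suite's openstack.ping_each_other helper, which (as the test logic implies) raises when instances cannot reach each other.

    # Illustrative sketch only; the stand-in helper always fails, mimicking
    # the situation where the instance ports have been disabled.
    def ping_each_other(ips):
        raise AssertionError('no connectivity between {}'.format(ips))

    def assert_no_ping(ips):
        try:
            ping_each_other(ips)
        except AssertionError as exc:
            # Expected branch: ports are down, so the ping check must fail.
            print('ping failed as expected: {}'.format(exc))
        else:
            # Runs only if the ping unexpectedly succeeded.
            raise AssertionError('Ping is available between instances')

    assert_no_ping(['10.0.0.3', '10.0.0.4'])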
@ -290,19 +281,19 @@ class TestDVSDestructive(TestBasic):
groups=["dvs_destructive_setup_2"]) groups=["dvs_destructive_setup_2"])
@log_snapshot_after_test @log_snapshot_after_test
def dvs_destructive_setup_2(self): def dvs_destructive_setup_2(self):
"""Verify that vmclusters should be migrate after reset controller. """Verify that vmclusters migrate after reset controller.
Scenario: Scenario:
1. Upload plugins to the master node 1. Upload plugins to the master node.
2. Install plugin. 2. Install plugin.
3. Configure cluster with 2 vcenter clusters. 3. Configure cluster with 2 vcenter clusters.
4. Add 3 node with controller role. 4. Add 3 nodes with controller role.
5. Add 2 node with compute role. 5. Add 2 nodes with compute role.
6. Configure vcenter 6. Configure vcenter.
7. Deploy the cluster. 7. Deploy the cluster.
8. Run smoke OSTF tests 8. Run smoke OSTF tests.
9. Launch instances. 1 per az. Assign floating ips. 9. Launch instances. 1 per az. Assign floating ips.
10. Make snapshot 10. Make snapshot.
Duration: 1.8 hours Duration: 1.8 hours
Snapshot: dvs_destructive_setup_2 Snapshot: dvs_destructive_setup_2
@ -325,14 +316,12 @@ class TestDVSDestructive(TestBasic):
self.show_step(3) self.show_step(3)
self.show_step(4) self.show_step(4)
self.fuel_web.update_nodes( self.fuel_web.update_nodes(cluster_id,
cluster_id, {'slave-01': ['controller'],
{'slave-01': ['controller'], 'slave-02': ['controller'],
'slave-02': ['controller'], 'slave-03': ['controller'],
'slave-03': ['controller'], 'slave-04': ['compute'],
'slave-04': ['compute'], 'slave-05': ['compute']})
'slave-05': ['compute']}
)
self.show_step(6) self.show_step(6)
self.fuel_web.vcenter_configure(cluster_id, multiclusters=True) self.fuel_web.vcenter_configure(cluster_id, multiclusters=True)
@ -340,8 +329,7 @@ class TestDVSDestructive(TestBasic):
self.fuel_web.deploy_cluster_wait(cluster_id) self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(8) self.show_step(8)
self.fuel_web.run_ostf( self.fuel_web.run_ostf(cluster_id=cluster_id, test_sets=['smoke'])
cluster_id=cluster_id, test_sets=['smoke'])
self.show_step(9) self.show_step(9)
os_ip = self.fuel_web.get_public_vip(cluster_id) os_ip = self.fuel_web.get_public_vip(cluster_id)
@ -354,9 +342,10 @@ class TestDVSDestructive(TestBasic):
network = os_conn.nova.networks.find(label=self.inter_net_name) network = os_conn.nova.networks.find(label=self.inter_net_name)
instances = openstack.create_instances( instances = openstack.create_instances(
os_conn=os_conn, nics=[{'net-id': network.id}], vm_count=1, os_conn=os_conn,
security_groups=[security_group.name] nics=[{'net-id': network.id}],
) vm_count=1,
security_groups=[security_group.name])
openstack.verify_instance_state(os_conn) openstack.verify_instance_state(os_conn)
openstack.create_and_assign_floating_ips(os_conn, instances) openstack.create_and_assign_floating_ips(os_conn, instances)
@ -367,16 +356,16 @@ class TestDVSDestructive(TestBasic):
groups=["dvs_vcenter_reset_controller"]) groups=["dvs_vcenter_reset_controller"])
@log_snapshot_after_test @log_snapshot_after_test
def dvs_vcenter_reset_controller(self): def dvs_vcenter_reset_controller(self):
"""Verify that vmclusters should be migrate after reset controller. """Verify that vmclusters migrate after reset controller.
Scenario: Scenario:
1. Revert to 'dvs_destructive_setup_2' snapshot. 1. Revert to 'dvs_destructive_setup_2' snapshot.
2. Verify connection between instances. Send ping, 2. Verify connection between instances. Send ping,
check that ping get reply check that ping gets a reply.
3. Reset controller. 3. Reset controller.
4. Check that vmclusters migrate to another controller. 4. Check that vmclusters migrate to another controller.
5. Verify connection between instances. 5. Verify connection between instances. Send ping, check that
Send ping, check that ping get reply ping gets a reply.
Duration: 1.8 hours Duration: 1.8 hours
@ -393,11 +382,9 @@ class TestDVSDestructive(TestBasic):
self.show_step(2) self.show_step(2)
srv_list = os_conn.get_servers() srv_list = os_conn.get_servers()
fips = [] fips = [os_conn.get_nova_instance_ip(s, net_name=self.inter_net_name,
for srv in srv_list: addrtype='floating')
fips.append(os_conn.get_nova_instance_ip( for s in srv_list]
srv, net_name=self.inter_net_name, addrtype='floating'))
openstack.ping_each_other(fips) openstack.ping_each_other(fips)
d_ctrl = self.fuel_web.get_nailgun_primary_node( d_ctrl = self.fuel_web.get_nailgun_primary_node(
@ -417,16 +404,16 @@ class TestDVSDestructive(TestBasic):
groups=["dvs_vcenter_shutdown_controller"]) groups=["dvs_vcenter_shutdown_controller"])
@log_snapshot_after_test @log_snapshot_after_test
def dvs_vcenter_shutdown_controller(self): def dvs_vcenter_shutdown_controller(self):
"""Verify that vmclusters should be migrate after shutdown controller. """Verify that vmclusters migrate after shutdown controller.
Scenario: Scenario:
1. Revert to 'dvs_destructive_setup_2' snapshot. 1. Revert to 'dvs_destructive_setup_2' snapshot.
2. Verify connection between instances. Send ping, 2. Verify connection between instances. Send ping,
check that ping get reply. check that ping gets a reply.
3. Shutdown controller. 3. Shutdown controller.
4. Check that vmclusters should be migrate to another controller. 4. Check that vmclusters migrate to another controller.
5. Verify connection between instances. 5. Verify connection between instances.
Send ping, check that ping get reply Send ping, check that ping gets a reply.
Duration: 1.8 hours Duration: 1.8 hours
@ -443,10 +430,9 @@ class TestDVSDestructive(TestBasic):
self.show_step(2) self.show_step(2)
srv_list = os_conn.get_servers() srv_list = os_conn.get_servers()
fips = [] fips = [os_conn.get_nova_instance_ip(
for srv in srv_list: srv, net_name=self.inter_net_name, addrtype='floating')
fips.append(os_conn.get_nova_instance_ip( for srv in srv_list]
srv, net_name=self.inter_net_name, addrtype='floating'))
openstack.ping_each_other(fips) openstack.ping_each_other(fips)
n_ctrls = self.fuel_web.get_nailgun_cluster_nodes_by_roles( n_ctrls = self.fuel_web.get_nailgun_cluster_nodes_by_roles(
@ -469,7 +455,7 @@ class TestDVSDestructive(TestBasic):
groups=["dvs_reboot_vcenter_1"]) groups=["dvs_reboot_vcenter_1"])
@log_snapshot_after_test @log_snapshot_after_test
def dvs_reboot_vcenter_1(self): def dvs_reboot_vcenter_1(self):
"""Verify that vmclusters should be migrate after reset controller. """Verify that vmclusters migrate after reset controller.
Scenario: Scenario:
1. Install DVS plugin on master node. 1. Install DVS plugin on master node.
@ -495,15 +481,14 @@ class TestDVSDestructive(TestBasic):
and flavor m1.micro. and flavor m1.micro.
12. Launch instance VM_2 with image TestVM-VMDK, availability zone 12. Launch instance VM_2 with image TestVM-VMDK, availability zone
vcenter and flavor m1.micro. vcenter and flavor m1.micro.
13. Check connection between instances, send ping from VM_1 to VM_2 13. Verify connection between instances: check that VM_1 and VM_2
and vice verse. can ping each other.
14. Reboot vcenter. 14. Reboot vcenter.
15. Check that controller lost connection with vCenter. 15. Check that controller lost connection with vCenter.
16. Wait for vCenter. 16. Wait for vCenter.
17. Ensure that all instances from vCenter displayed in dashboard. 17. Ensure that all instances from vCenter displayed in dashboard.
18. Run OSTF. 18. Run OSTF.
Duration: 2.5 hours Duration: 2.5 hours
""" """
@ -525,13 +510,11 @@ class TestDVSDestructive(TestBasic):
self.show_step(3) self.show_step(3)
self.show_step(4) self.show_step(4)
self.show_step(5) self.show_step(5)
self.fuel_web.update_nodes( self.fuel_web.update_nodes(cluster_id,
cluster_id, {'slave-01': ['controller'],
{'slave-01': ['controller'], 'slave-02': ['compute'],
'slave-02': ['compute'], 'slave-03': ['cinder-vmware'],
'slave-03': ['cinder-vmware'], 'slave-04': ['cinder']})
'slave-04': ['cinder']}
)
self.show_step(6) self.show_step(6)
plugin.enable_plugin(cluster_id, self.fuel_web) plugin.enable_plugin(cluster_id, self.fuel_web)
@ -558,7 +541,7 @@ class TestDVSDestructive(TestBasic):
groups=["dvs_reboot_vcenter_2"]) groups=["dvs_reboot_vcenter_2"])
@log_snapshot_after_test @log_snapshot_after_test
def dvs_reboot_vcenter_2(self): def dvs_reboot_vcenter_2(self):
"""Verify that vmclusters should be migrate after reset controller. """Verify that vmclusters migrate after reset controller.
Scenario: Scenario:
1. Install DVS plugin on master node. 1. Install DVS plugin on master node.
@ -585,15 +568,14 @@ class TestDVSDestructive(TestBasic):
and flavor m1.micro. and flavor m1.micro.
12. Launch instance VM_2 with image TestVM-VMDK, availability zone 12. Launch instance VM_2 with image TestVM-VMDK, availability zone
vcenter and flavor m1.micro. vcenter and flavor m1.micro.
13. Check connection between instances, send ping from VM_1 to VM_2 13. Verify connection between instances: check that VM_1 and VM_2
and vice verse. can ping each other.
14. Reboot vcenter. 14. Reboot vcenter.
15. Check that controller lost connection with vCenter. 15. Check that controller lost connection with vCenter.
16. Wait for vCenter. 16. Wait for vCenter.
17. Ensure that all instances from vCenter displayed in dashboard. 17. Ensure that all instances from vCenter are displayed in dashboard.
18. Run Smoke OSTF. 18. Run Smoke OSTF.
Duration: 2.5 hours Duration: 2.5 hours
""" """
@ -615,25 +597,21 @@ class TestDVSDestructive(TestBasic):
self.show_step(3) self.show_step(3)
self.show_step(4) self.show_step(4)
self.show_step(5) self.show_step(5)
self.fuel_web.update_nodes( self.fuel_web.update_nodes(cluster_id,
cluster_id, {'slave-01': ['controller'],
{'slave-01': ['controller'], 'slave-02': ['compute'],
'slave-02': ['compute'], 'slave-03': ['cinder-vmware'],
'slave-03': ['cinder-vmware'], 'slave-04': ['cinder'],
'slave-04': ['cinder'], 'slave-05': ['compute-vmware']})
'slave-05': ['compute-vmware']}
)
self.show_step(6) self.show_step(6)
plugin.enable_plugin(cluster_id, self.fuel_web) plugin.enable_plugin(cluster_id, self.fuel_web)
self.show_step(7) self.show_step(7)
target_node_1 = self.node_name('slave-05') target_node_1 = self.node_name('slave-05')
self.fuel_web.vcenter_configure( self.fuel_web.vcenter_configure(cluster_id,
cluster_id, target_node_1=target_node_1,
target_node_1=target_node_1, multiclusters=False)
multiclusters=False
)
self.show_step(8) self.show_step(8)
self.fuel_web.verify_network(cluster_id) self.fuel_web.verify_network(cluster_id)


@ -54,22 +54,16 @@ class TestDVSMaintenance(TestBasic):
"""Deploy cluster with plugin and vmware datastore backend. """Deploy cluster with plugin and vmware datastore backend.
Scenario: Scenario:
1. Upload plugins to the master node 1. Revert to dvs_bvt snapshot.
2. Install plugin. 2. Create non default network net_1.
3. Create cluster with vcenter. 3. Launch instances with created network in nova and vcenter az.
4. Add 3 node with controller role. 4. Create Security groups.
5. Add 2 node with compute + ceph role. 4. Create security groups.
6. Add 1 node with compute-vmware + cinder vmware role. 5. Attach created security groups to instances.
7. Deploy the cluster.
8. Run OSTF.
9. Create non default network.
10. Create Security groups
11. Launch instances with created network in nova and vcenter az.
12. Attached created security groups to instances.
13. Check connection between instances from different az.
Duration: 1.8 hours Duration: 1.8 hours
""" """
self.show_step(1)
self.env.revert_snapshot("dvs_bvt") self.env.revert_snapshot("dvs_bvt")
cluster_id = self.fuel_web.get_last_created_cluster() cluster_id = self.fuel_web.get_last_created_cluster()
@ -81,80 +75,81 @@ class TestDVSMaintenance(TestBasic):
SERVTEST_TENANT) SERVTEST_TENANT)
tenant = os_conn.get_tenant(SERVTEST_TENANT) tenant = os_conn.get_tenant(SERVTEST_TENANT)
# Create non default network with subnet.
# Create non default network with subnet
self.show_step(2)
logger.info('Create network {}'.format(self.net_data[0].keys()[0])) logger.info('Create network {}'.format(self.net_data[0].keys()[0]))
network = os_conn.create_network( net_1 = os_conn.create_network(
network_name=self.net_data[0].keys()[0], network_name=self.net_data[0].keys()[0],
tenant_id=tenant.id)['network'] tenant_id=tenant.id)['network']
subnet = os_conn.create_subnet( subnet = os_conn.create_subnet(
subnet_name=network['name'], subnet_name=net_1['name'],
network_id=network['id'], network_id=net_1['id'],
cidr=self.net_data[0][self.net_data[0].keys()[0]], cidr=self.net_data[0][self.net_data[0].keys()[0]],
ip_version=4) ip_version=4)
# Check that network are created. # Check that the network is created.
assert_true( assert_true(os_conn.get_network(net_1['name'])['id'] == net_1['id'])
os_conn.get_network(network['name'])['id'] == network['id']
)
# Add net_1 to default router # Add net_1 to default router
router = os_conn.get_router(os_conn.get_network(self.ext_net_name)) router = os_conn.get_router(os_conn.get_network(self.ext_net_name))
os_conn.add_router_interface( os_conn.add_router_interface(router_id=router["id"],
router_id=router["id"], subnet_id=subnet["id"])
subnet_id=subnet["id"])
# Launch instance 2 VMs of vcenter and 2 VMs of nova self.show_step(3)
# in the tenant network net_01
openstack.create_instances( # Launch 2 vcenter VMs and 2 nova VMs in the tenant network net_01
os_conn=os_conn, vm_count=1, openstack.create_instances(os_conn=os_conn,
nics=[{'net-id': network['id']}] vm_count=1,
) nics=[{'net-id': net_1['id']}])
# Launch instance 2 VMs of vcenter and 2 VMs of nova
# in the default network # Launch 2 vcenter VMs and 2 nova VMs in the default network
network = os_conn.nova.networks.find(label=self.inter_net_name) net_1 = os_conn.nova.networks.find(label=self.inter_net_name)
instances = openstack.create_instances( instances = openstack.create_instances(os_conn=os_conn,
os_conn=os_conn, vm_count=1, vm_count=1,
nics=[{'net-id': network.id}]) nics=[{'net-id': net_1.id}])
openstack.verify_instance_state(os_conn) openstack.verify_instance_state(os_conn)
self.show_step(4)
# Create security groups SG_1 to allow ICMP traffic. # Create security groups SG_1 to allow ICMP traffic.
# Add Ingress rule for ICMP protocol to SG_1 # Add Ingress rule for ICMP protocol to SG_1
# Create security groups SG_2 to allow TCP traffic 22 port. # Create security groups SG_2 to allow TCP traffic 22 port.
# Add Ingress rule for TCP protocol to SG_2 # Add Ingress rule for TCP protocol to SG_2
sec_name = ['SG1', 'SG2'] sec_name = ['SG1', 'SG2']
sg1 = os_conn.nova.security_groups.create( sg1 = os_conn.nova.security_groups.create(sec_name[0], "descr")
sec_name[0], "descr") sg2 = os_conn.nova.security_groups.create(sec_name[1], "descr")
sg2 = os_conn.nova.security_groups.create( rulesets = [{
sec_name[1], "descr") # ssh
rulesets = [ 'ip_protocol': 'tcp',
{ 'from_port': 22,
# ssh 'to_port': 22,
'ip_protocol': 'tcp', 'cidr': '0.0.0.0/0',
'from_port': 22, }, {
'to_port': 22, # ping
'cidr': '0.0.0.0/0', 'ip_protocol': 'icmp',
}, 'from_port': -1,
{ 'to_port': -1,
# ping 'cidr': '0.0.0.0/0',
'ip_protocol': 'icmp', }]
'from_port': -1, os_conn.nova.security_group_rules.create(sg1.id, **rulesets[0])
'to_port': -1, os_conn.nova.security_group_rules.create(sg2.id, **rulesets[1])
'cidr': '0.0.0.0/0',
}
]
os_conn.nova.security_group_rules.create(
sg1.id, **rulesets[0]
)
os_conn.nova.security_group_rules.create(
sg2.id, **rulesets[1]
)
# Remove default security group and attach SG_1 and SG2 to VMs # Remove default security group and attach SG_1 and SG2 to VMs
self.show_step(5)
srv_list = os_conn.get_servers() srv_list = os_conn.get_servers()
for srv in srv_list: for srv in srv_list:
srv.remove_security_group(srv.security_groups[0]['name']) srv.remove_security_group(srv.security_groups[0]['name'])
srv.add_security_group(sg1.id) srv.add_security_group(sg1.id)
srv.add_security_group(sg2.id) srv.add_security_group(sg2.id)
fip = openstack.create_and_assign_floating_ips(os_conn, instances) fip = openstack.create_and_assign_floating_ips(os_conn, instances)
# Check ping between VMs # Check ping between VMs
self.show_step(6)
ip_pair = dict.fromkeys(fip) ip_pair = dict.fromkeys(fip)
for key in ip_pair: for key in ip_pair:
ip_pair[key] = [value for value in fip if key != value] ip_pair[key] = [value for value in fip if key != value]
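The ip_pair map built above associates every floating IP with the list of all other floating IPs it is expected to reach. An equivalent stand-alone sketch using a dict comprehension; the addresses below are invented for illustration only.

    # Illustrative sketch only; fip stands in for the floating IPs assigned above.
    fip = ['172.16.0.10', '172.16.0.11', '172.16.0.12']
    ip_pair = {key: [value for value in fip if key != value] for key in fip}
    assert ip_pair['172.16.0.10'] == ['172.16.0.11', '172.16.0.12']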


@ -26,12 +26,12 @@ TestBasic = fuelweb_test.tests.base_test_case.TestBasic
SetupEnvironment = fuelweb_test.tests.base_test_case.SetupEnvironment SetupEnvironment = fuelweb_test.tests.base_test_case.SetupEnvironment
@test(groups=["plugins", 'dvs_vcenter_plugin', 'dvs_vcenter_smoke']) @test(groups=['plugins', 'dvs_vcenter_plugin', 'dvs_vcenter_smoke'])
class TestDVSSmoke(TestBasic): class TestDVSSmoke(TestBasic):
"""Smoke test suite. """Smoke test suite.
The goal of smoke testing is to ensure that the most critical features The goal of smoke testing is to ensure that the most critical features
of Fuel VMware DVS plugin work after new build delivery. Smoke tests of Fuel VMware DVS plugin work after new build delivery. Smoke tests
will be used by QA to accept software builds from Development team. will be used by QA to accept software builds from Development team.
""" """
@ -42,7 +42,7 @@ class TestDVSSmoke(TestBasic):
"""Check that plugin can be installed. """Check that plugin can be installed.
Scenario: Scenario:
1. Upload plugins to the master node 1. Upload plugins to the master node.
2. Install plugin. 2. Install plugin.
3. Ensure that plugin is installed successfully using cli, 3. Ensure that plugin is installed successfully using cli,
run command 'fuel plugins'. Check name, version of plugin. run command 'fuel plugins'. Check name, version of plugin.
@ -59,19 +59,17 @@ class TestDVSSmoke(TestBasic):
cmd = 'fuel plugins list' cmd = 'fuel plugins list'
output = self.ssh_manager.execute_on_remote( output = self.ssh_manager.execute_on_remote(
ip=self.ssh_manager.admin_ip, ip=self.ssh_manager.admin_ip, cmd=cmd
cmd=cmd)['stdout'].pop().split(' ') )['stdout'].pop().split(' ')
# check name # check name
assert_true( assert_true(
plugin.plugin_name in output, plugin.plugin_name in output,
"Plugin '{0}' is not installed.".format(plugin.plugin_name) "Plugin '{0}' is not installed.".format(plugin.plugin_name))
)
# check version # check version
assert_true( assert_true(
plugin.DVS_PLUGIN_VERSION in output, plugin.DVS_PLUGIN_VERSION in output,
"Plugin '{0}' is not installed.".format(plugin.plugin_name) "Plugin '{0}' is not installed.".format(plugin.plugin_name))
)
self.env.make_snapshot("dvs_install", is_make=True) self.env.make_snapshot("dvs_install", is_make=True)
@test(depends_on=[dvs_install], @test(depends_on=[dvs_install],
@ -85,7 +83,6 @@ class TestDVSSmoke(TestBasic):
2. Remove plugin. 2. Remove plugin.
3. Verify that plugin is removed, run command 'fuel plugins'. 3. Verify that plugin is removed, run command 'fuel plugins'.
Duration: 5 min Duration: 5 min
""" """
@ -96,21 +93,17 @@ class TestDVSSmoke(TestBasic):
cmd = 'fuel plugins --remove {0}=={1}'.format( cmd = 'fuel plugins --remove {0}=={1}'.format(
plugin.plugin_name, plugin.DVS_PLUGIN_VERSION) plugin.plugin_name, plugin.DVS_PLUGIN_VERSION)
self.ssh_manager.execute_on_remote( self.ssh_manager.execute_on_remote(ip=self.ssh_manager.admin_ip,
ip=self.ssh_manager.admin_ip, cmd=cmd,
cmd=cmd, err_msg='Can not remove plugin.')
err_msg='Can not remove plugin.'
)
self.show_step(3) self.show_step(3)
output = self.ssh_manager.execute_on_remote( output = self.ssh_manager.execute_on_remote(
ip=self.ssh_manager.admin_ip, ip=self.ssh_manager.admin_ip,
cmd='fuel plugins list')['stdout'].pop().split(' ') cmd='fuel plugins list')['stdout'].pop().split(' ')
assert_true( assert_true(plugin.plugin_name not in output,
plugin.plugin_name not in output, "Plugin '{0}' is not removed".format(plugin.plugin_name))
"Plugin '{0}' is not removed".format(plugin.plugin_name)
)
@test(depends_on=[dvs_install], @test(depends_on=[dvs_install],
groups=["dvs_vcenter_smoke"]) groups=["dvs_vcenter_smoke"])
@ -119,7 +112,7 @@ class TestDVSSmoke(TestBasic):
"""Check deployment with VMware DVS plugin and one controller. """Check deployment with VMware DVS plugin and one controller.
Scenario: Scenario:
1. Upload plugins to the master node 1. Upload plugins to the master node.
2. Install plugin. 2. Install plugin.
3. Create a new environment with following parameters: 3. Create a new environment with the following parameters:
* Compute: KVM/QEMU with vCenter * Compute: KVM/QEMU with vCenter
@ -130,7 +123,7 @@ class TestDVSSmoke(TestBasic):
5. Configure interfaces on nodes. 5. Configure interfaces on nodes.
6. Configure network settings. 6. Configure network settings.
7. Enable and configure DVS plugin. 7. Enable and configure DVS plugin.
8 Configure VMware vCenter Settings. 8. Configure VMware vCenter Settings.
Add 1 vSphere clusters and configure Nova Compute instances Add 1 vSphere cluster and configure Nova Compute instances
on controllers. on controllers.
9. Deploy the cluster. 9. Deploy the cluster.
@ -139,9 +132,13 @@ class TestDVSSmoke(TestBasic):
Duration: 1.8 hours Duration: 1.8 hours
""" """
self.show_step(1)
self.show_step(2)
self.env.revert_snapshot("dvs_install") self.env.revert_snapshot("dvs_install")
# Configure cluster with 2 vcenter clusters # Configure cluster with 1 vcenter cluster
self.show_step(3)
cluster_id = self.fuel_web.create_cluster( cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__, name=self.__class__.__name__,
mode=DEPLOYMENT_MODE, mode=DEPLOYMENT_MODE,
@ -150,22 +147,24 @@ class TestDVSSmoke(TestBasic):
"net_segment_type": NEUTRON_SEGMENT_TYPE "net_segment_type": NEUTRON_SEGMENT_TYPE
} }
) )
plugin.enable_plugin( plugin.enable_plugin(cluster_id, self.fuel_web, multiclusters=False)
cluster_id, self.fuel_web, multiclusters=False)
# Assign role to node # Assign role to node
self.fuel_web.update_nodes( self.show_step(4)
cluster_id, self.fuel_web.update_nodes(cluster_id, {'slave-01': ['controller']})
{'slave-01': ['controller']}
)
# Configure VMWare vCenter settings # Configure VMware vCenter settings
self.show_step(5)
self.show_step(6)
self.show_step(7)
self.show_step(8)
self.fuel_web.vcenter_configure(cluster_id) self.fuel_web.vcenter_configure(cluster_id)
self.show_step(9)
self.fuel_web.deploy_cluster_wait(cluster_id) self.fuel_web.deploy_cluster_wait(cluster_id)
self.fuel_web.run_ostf( self.show_step(10)
cluster_id=cluster_id, test_sets=['smoke']) self.fuel_web.run_ostf(cluster_id=cluster_id, test_sets=['smoke'])
@test(groups=["plugins", 'dvs_vcenter_bvt']) @test(groups=["plugins", 'dvs_vcenter_bvt'])
@ -179,7 +178,7 @@ class TestDVSBVT(TestBasic):
"""Deploy cluster with DVS plugin and ceph storage. """Deploy cluster with DVS plugin and ceph storage.
Scenario: Scenario:
1. Upload plugins to the master node 1. Upload plugins to the master node.
2. Install plugin. 2. Install plugin.
3. Create a new environment with following parameters: 3. Create a new environment with the following parameters:
* Compute: KVM/QEMU with vCenter * Compute: KVM/QEMU with vCenter
@ -209,10 +208,13 @@ class TestDVSBVT(TestBasic):
""" """
self.env.revert_snapshot("ready_with_9_slaves") self.env.revert_snapshot("ready_with_9_slaves")
plugin.install_dvs_plugin( self.show_step(1)
self.ssh_manager.admin_ip) self.show_step(2)
plugin.install_dvs_plugin(self.ssh_manager.admin_ip)
# Configure cluster with 2 vcenter clusters and vcenter glance # Configure cluster with 2 vcenter clusters and vcenter glance
self.show_step(3)
cluster_id = self.fuel_web.create_cluster( cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__, name=self.__class__.__name__,
mode=DEPLOYMENT_MODE, mode=DEPLOYMENT_MODE,
@ -227,7 +229,9 @@ class TestDVSBVT(TestBasic):
) )
plugin.enable_plugin(cluster_id, self.fuel_web) plugin.enable_plugin(cluster_id, self.fuel_web)
# Assign role to node # Assign roles to nodes
self.show_step(4)
self.fuel_web.update_nodes( self.fuel_web.update_nodes(
cluster_id, cluster_id,
{'slave-01': ['controller'], {'slave-01': ['controller'],
@ -236,22 +240,26 @@ class TestDVSBVT(TestBasic):
'slave-04': ['compute', 'ceph-osd'], 'slave-04': ['compute', 'ceph-osd'],
'slave-05': ['compute', 'ceph-osd'], 'slave-05': ['compute', 'ceph-osd'],
'slave-06': ['compute', 'ceph-osd'], 'slave-06': ['compute', 'ceph-osd'],
'slave-07': ['compute-vmware', 'cinder-vmware']} 'slave-07': ['compute-vmware', 'cinder-vmware']})
)
# Configure VMWare vCenter settings # Configure VMware vCenter settings
self.show_step(5)
self.show_step(6)
self.show_step(7)
target_node_2 = self.fuel_web.get_nailgun_node_by_name('slave-07') target_node_2 = self.fuel_web.get_nailgun_node_by_name('slave-07')
target_node_2 = target_node_2['hostname'] target_node_2 = target_node_2['hostname']
self.fuel_web.vcenter_configure( self.fuel_web.vcenter_configure(cluster_id,
cluster_id, target_node_2=target_node_2,
target_node_2=target_node_2, multiclusters=True)
multiclusters=True
)
self.show_step(8)
self.fuel_web.verify_network(cluster_id, timeout=60 * 15) self.fuel_web.verify_network(cluster_id, timeout=60 * 15)
self.show_step(9)
self.fuel_web.deploy_cluster_wait(cluster_id, timeout=3600 * 3) self.fuel_web.deploy_cluster_wait(cluster_id, timeout=3600 * 3)
self.fuel_web.run_ostf( self.show_step(10)
cluster_id=cluster_id, test_sets=['smoke']) self.fuel_web.run_ostf(cluster_id=cluster_id, test_sets=['smoke'])
self.env.make_snapshot("dvs_bvt", is_make=True) self.env.make_snapshot("dvs_bvt", is_make=True)

File diff suppressed because it is too large


@ -41,7 +41,7 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
return self.fuel_web.get_nailgun_node_by_name(name_node)['hostname'] return self.fuel_web.get_nailgun_node_by_name(name_node)['hostname']
def get_network_template(self, template_name): def get_network_template(self, template_name):
"""Get netwok template. """Get network template.
param: template_name: type string, name of file :param template_name: type string, name of the file
""" """
@ -61,7 +61,7 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
1. Upload plugins to the master node. 1. Upload plugins to the master node.
2. Install plugin. 2. Install plugin.
3. Create cluster with vcenter. 3. Create cluster with vcenter.
4. Set CephOSD as backend for Glance and Cinder 4. Set CephOSD as backend for Glance and Cinder.
5. Add nodes with following roles: 5. Add nodes with the following roles:
controller controller
compute-vmware compute-vmware
@ -70,8 +70,8 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
3 ceph-osd 3 ceph-osd
6. Upload network template. 6. Upload network template.
7. Check network configuration. 7. Check network configuration.
8. Deploy the cluster 8. Deploy the cluster.
9. Run OSTF 9. Run OSTF.
Duration 2.5 hours Duration 2.5 hours
@ -80,9 +80,12 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
""" """
self.env.revert_snapshot("ready_with_9_slaves") self.env.revert_snapshot("ready_with_9_slaves")
plugin.install_dvs_plugin( self.show_step(1)
self.ssh_manager.admin_ip) self.show_step(2)
plugin.install_dvs_plugin(self.ssh_manager.admin_ip)
self.show_step(3)
self.show_step(4)
cluster_id = self.fuel_web.create_cluster( cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__, name=self.__class__.__name__,
mode=DEPLOYMENT_MODE, mode=DEPLOYMENT_MODE,
@ -101,29 +104,26 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
plugin.enable_plugin(cluster_id, self.fuel_web) plugin.enable_plugin(cluster_id, self.fuel_web)
self.fuel_web.update_nodes( self.show_step(5)
cluster_id, self.fuel_web.update_nodes(cluster_id,
{ {'slave-01': ['controller'],
'slave-01': ['controller'], 'slave-02': ['compute-vmware'],
'slave-02': ['compute-vmware'], 'slave-03': ['compute-vmware'],
'slave-03': ['compute-vmware'], 'slave-04': ['compute'],
'slave-04': ['compute'], 'slave-05': ['ceph-osd'],
'slave-05': ['ceph-osd'], 'slave-06': ['ceph-osd'],
'slave-06': ['ceph-osd'], 'slave-07': ['ceph-osd']},
'slave-07': ['ceph-osd'], update_interfaces=False)
},
update_interfaces=False
)
# Configure VMWare vCenter settings # Configure VMware vCenter settings
self.show_step(6)
target_node_1 = self.node_name('slave-02') target_node_1 = self.node_name('slave-02')
target_node_2 = self.node_name('slave-03') target_node_2 = self.node_name('slave-03')
self.fuel_web.vcenter_configure( self.fuel_web.vcenter_configure(cluster_id,
cluster_id, target_node_1=target_node_1,
target_node_1=target_node_1, target_node_2=target_node_2,
target_node_2=target_node_2, multiclusters=True)
multiclusters=True
)
network_template = self.get_network_template('default') network_template = self.get_network_template('default')
self.fuel_web.client.upload_network_template( self.fuel_web.client.upload_network_template(
@ -138,21 +138,24 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
logger.debug('Networks: {0}'.format( logger.debug('Networks: {0}'.format(
self.fuel_web.client.get_network_groups())) self.fuel_web.client.get_network_groups()))
self.show_step(7)
self.fuel_web.verify_network(cluster_id) self.fuel_web.verify_network(cluster_id)
self.show_step(8)
self.fuel_web.deploy_cluster_wait(cluster_id, timeout=180 * 60) self.fuel_web.deploy_cluster_wait(cluster_id, timeout=180 * 60)
self.fuel_web.verify_network(cluster_id) self.fuel_web.verify_network(cluster_id)
self.check_ipconfig_for_template(cluster_id, network_template, self.check_ipconfig_for_template(
networks) cluster_id, network_template, networks)
self.check_services_networks(cluster_id, network_template) self.check_services_networks(cluster_id, network_template)
self.fuel_web.run_ostf(cluster_id=cluster_id, self.show_step(9)
timeout=3600, self.fuel_web.run_ostf(
test_sets=['smoke', 'sanity', cluster_id=cluster_id,
'ha', 'tests_platform']) timeout=3600,
self.check_ipconfig_for_template(cluster_id, network_template, test_sets=['smoke', 'sanity', 'ha', 'tests_platform'])
networks)
self.check_ipconfig_for_template(
cluster_id, network_template, networks)
self.check_services_networks(cluster_id, network_template) self.check_services_networks(cluster_id, network_template)