
Refactor code of vmware dvs tests

* make the code more readable
* fix most of the misprints in the docs

Change-Id: I56637bd15b9491cd891e1be95f48c4cd61f372ca
ekhomyakova, 2 years ago (commit edbe780e60)

docs/test_plan/source/test_suite_failover.rst (+16, -20)

@@ -64,7 +64,7 @@ Steps
 #####
 
     1. Install DVS plugin on master node.
-    2. Create a new environment with following parameters:
+    2. Create a new environment with the following parameters:
        * Compute: KVM/QEMU with vCenter
        * Networking: Neutron with VLAN segmentation
        * Storage: default
@@ -81,14 +81,12 @@ Steps
     7. Configure VMware vCenter Settings. Add 2 vSphere clusters and configure Nova Compute instances on controllers.
     8. Verify networks.
     9. Deploy cluster.
-    10. Run OSTF
+    10. Run OSTF.
     11. Launch instances in nova and vcenter availability zones.
-    12. Verify connection between instances. Send ping.
-        Check that ping get reply.
+    12. Verify connection between instances: check that instances can ping each other.
     13. Shutdown controller with vmclusters.
     14. Check that vcenter-vmcluster migrates to another controller.
-    15. Verify connection between instances.
-        Send ping, check that ping get reply.
+    15. Verify connection between instances: check that instances can ping each other.
 
 
 Expected result
@@ -123,7 +121,7 @@ Steps
 #####
 
     1. Install DVS plugin on master node.
-    2. Create a new environment with following parameters:
+    2. Create a new environment with the following parameters:
        * Compute: KVM/QEMU with vCenter
        * Networking: Neutron with VLAN segmentation
        * Storage: default
@@ -141,9 +139,9 @@ Steps
     8. Verify networks.
     9. Deploy cluster.
     10. Run OSTF.
-    11. Launch instance VM_1 with image TestVM, availability zone nova and flavor m1.micro.
-    12. Launch instance VM_2  with image TestVM-VMDK, availability zone vcenter and flavor m1.micro.
-    13. Check connection between instances, send ping from VM_1 to VM_2 and vice verse.
+    11. Launch instance VM_1 from image TestVM, with availability zone nova and flavor m1.micro.
+    12. Launch instance VM_2 from image TestVM-VMDK, with availability zone vcenter and flavor m1.micro.
+    13. Verify connection between instances: check that VM_1 and VM_2 can ping each other.
     14. Reboot vcenter.
     15. Check that controller lost connection with vCenter.
     16. Wait for vCenter.
@@ -202,10 +200,10 @@ Steps
     8. Verify networks.
     9. Deploy cluster.
     10. Run OSTF.
-    11. Launch instance VM_1 with image TestVM,  nova availability zone and flavor m1.micro.
-    12. Launch instance VM_2  with image TestVM-VMDK,  vcenter availability zone and flavor m1.micro.
-    13. Check connection between instances, send ping from VM_1 to VM_2 and vice verse.
-    14. Reboot vcenter.
+    11. Launch instance VM_1 with image TestVM, nova availability zone and flavor m1.micro.
+    12. Launch instance VM_2 with image TestVM-VMDK, vcenter availability zone and flavor m1.micro.
+    13. Verify connection between instances: check that VM_1 and VM_2 can ping each other.
+    14. Reboot vCenter.
     15. Check that ComputeVMware lost connection with vCenter.
     16. Wait for vCenter.
     17. Ensure connectivity between instances.
@@ -261,14 +259,12 @@ Steps
     7. Configure VMware vCenter Settings. Add 2 vSphere clusters and configure Nova Compute instances on controllers.
     8. Verify networks.
     9. Deploy cluster.
-    10. Run OSTF
+    10. Run OSTF.
     11. Launch instances in nova and vcenter availability zones.
-    12. Verify connection between instances. Send ping.
-        Check that ping get reply.
-    13. Reset controller with  vmclusters services.
+    12. Verify connection between instances: check that instances can ping each other.
+    13. Reset controller with vmclusters services.
     14. Check that vmclusters services migrate to another controller.
-    15. Verify connection between instances.
-        Send ping, check that ping get reply.
+    15. Verify connection between instances: check that instances can ping each other.
 
 
 Expected result
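Several of the reworded steps above ("check that instances can ping each other") boil down to the same connectivity probe. A minimal sketch of such a check, assuming the test harness can execute `ping` either locally or through a command prefix that runs it on the source instance (the helper name and prefix mechanism are illustrative, not part of the actual test suite):

```python
import subprocess

def can_ping(src_cmd_prefix, dst_ip, count=3, timeout=30):
    """Return True if dst_ip answers ICMP echo requests.

    src_cmd_prefix is a command prefix that runs ping on the source
    instance (e.g. an ssh wrapper); an empty list runs it from the
    test host itself.
    """
    cmd = list(src_cmd_prefix) + ["ping", "-c", str(count), "-W", "2", dst_ip]
    result = subprocess.run(cmd, capture_output=True, timeout=timeout)
    # ping exits 0 only when at least one reply was received
    return result.returncode == 0
```

Steps such as 12 and 15 then reduce to asserting `can_ping(...)` in both directions between the two instances.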

docs/test_plan/source/test_suite_smoke.rst (+8, -8)

@@ -105,15 +105,15 @@ Steps
        * Storage: default
        * Additional services: default
     3. Go to Network tab -> Other subtab and check DVS plugin section is displayed with all required GUI elements:
-       'Neutron VMware DVS ML2 plugin' check box
-       "Use the VMware DVS firewall driver" check box
-       "Enter the cluster to dvSwitch mapping." text field with description 'List of ClusterName:SwitchName pairs, separated by semicolon. '
+       'Neutron VMware DVS ML2 plugin' checkbox
+       'Use the VMware DVS firewall driver' checkbox
+       'Enter the cluster to dvSwitch mapping.' text field with description 'List of ClusterName:SwitchName pairs, separated by semicolon.'
        'Versions' radio button with <plugin version>
-    4. Verify that check box "Neutron VMware DVS ML2 plugin" is enabled by default.
-    5. Verify that user can disable -> enable the DVS plugin by clicking on the checkbox “Neutron VMware DVS ML2 plugin”
-    6. Verify that check box "Use the VMware DVS firewall driver" is enabled by default.
+    4. Verify that checkbox 'Neutron VMware DVS ML2 plugin' is enabled by default.
+    5. Verify that user can disable/enable the DVS plugin by clicking on the checkbox 'Neutron VMware DVS ML2 plugin'.
+    6. Verify that checkbox 'Use the VMware DVS firewall driver' is enabled by default.
     7. Verify that all labels of the DVS plugin section have the same font style and color.
-    8. Verify that all elements of the DVS plugin section are vertically aligned
+    8. Verify that all elements of the DVS plugin section are vertically aligned.
 
 
 Expected result
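The text field checked in step 3 accepts a 'List of ClusterName:SwitchName pairs, separated by semicolon'. A minimal sketch of parsing that mapping format (the function name is illustrative; this is not code from the plugin itself):

```python
def parse_dvswitch_mapping(raw):
    """Parse 'Cluster1:dvSwitch1;Cluster2:dvSwitch2' into a dict."""
    mapping = {}
    for pair in raw.split(";"):
        pair = pair.strip()
        if not pair:
            continue  # tolerate a trailing semicolon
        cluster, _, switch = pair.partition(":")
        mapping[cluster.strip()] = switch.strip()
    return mapping
```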
@@ -184,7 +184,7 @@ dvs_vcenter_bvt
 Description
 ###########
 
-Check deployment with VMware DVS plugin, 3 Controllers, Compute, 2 CephOSD, CinderVMware and computeVMware roles.
+Check deployment with VMware DVS plugin, 3 Controllers, 3 Compute + CephOSD and CinderVMware + computeVMware roles.
 
 
 Complexity

docs/test_plan/source/test_suite_system.rst (+135, -139)

@@ -15,7 +15,7 @@ dvs_vcenter_systest_setup
 Description
 ###########
 
-Deploy environment in DualHypervisors mode with 3 controllers, 2 compute-vmware and 1 compute nodes. Nova Compute instances are running on controller nodes.
+Deploy environment in DualHypervisors mode with 1 controller, 1 compute-vmware and 2 compute nodes. Nova Compute instances are running on controller nodes.
 
 
 Complexity
@@ -86,7 +86,7 @@ Steps
     5. Remove private network net_01.
     6. Check that network net_01 is not present in the vSphere.
     7. Add private network net_01.
-    8. Check that networks is  present in the vSphere.
+    8. Check that network net_01 is present in the vSphere.
 
 
 Expected result
@@ -160,14 +160,14 @@ Steps
 
     1. Set up for system tests.
     2. Log in to Horizon Dashboard.
-    3. Navigate to Project ->  Compute -> Instances
+    3. Navigate to Project -> Compute -> Instances.
     4. Launch instance VM_1 with image TestVM, availability zone nova and flavor m1.micro.
-    5. Launch instance VM_2  with image TestVM-VMDK, availability zone  vcenter and flavor m1.micro.
-    6. Verify that instances  communicate between each other. Send icmp ping from VM_1 to VM_2  and vice versa.
+    5. Launch instance VM_2 with image TestVM-VMDK, availability zone vcenter and flavor m1.micro.
+    6. Verify that instances communicate between each other: check that VM_1 and VM_2 can ping each other.
     7. Disable interface of VM_1.
-    8. Verify that instances  don't communicate between each other. Send icmp ping from VM_2 to VM_1  and vice versa.
+    8. Verify that instances don't communicate between each other: check that VM_1 and VM_2 cannot ping each other.
     9. Enable interface of VM_1.
-    10. Verify that instances  communicate between each other. Send icmp ping from VM_1 to VM_2  and vice versa.
+    10. Verify that instances communicate between each other: check that VM_1 and VM_2 can ping each other.
 
 
 Expected result
@@ -204,13 +204,13 @@ Steps
     1. Set up for system tests.
     2. Log in to Horizon Dashboard.
     3. Add two private networks (net01, and net02).
-    4. Add one  subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01, 192.168.102.0/24) to each network.
+    4. Add one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.102.0/24) to each network.
     5. Launch instance VM_1 with image TestVM and flavor m1.micro in nova availability zone.
-    6. Launch instance VM_2  with image TestVM-VMDK and flavor m1.micro vcenter availability zone.
+    6. Launch instance VM_2 with image TestVM-VMDK and flavor m1.micro in vcenter availability zone.
     7. Check abilities to assign multiple vNIC net01 and net02 to VM_1.
     8. Check abilities to assign multiple vNIC net01 and net02 to VM_2.
     9. Check that both interfaces on each instance have an IP address. To activate second interface on cirros edit the /etc/network/interfaces and restart network: "sudo /etc/init.d/S40network restart"
-    10. Send icmp ping from VM_1 to VM_2  and vice versa.
+    10. Check that VM_1 and VM_2 can ping each other.
 
 
 Expected result
@@ -219,8 +219,8 @@ Expected result
 VM_1 and VM_2 should be attached to multiple vNIC net01 and net02. Pings should get a response.
 
 
-Check connection between instances  in one default tenant.
-----------------------------------------------------------
+Check connection between instances in one default tenant.
+---------------------------------------------------------
 
 
 ID
@@ -245,10 +245,10 @@ Steps
 #####
 
     1. Set up for system tests.
-    2. Navigate to Project ->  Compute -> Instances
+    2. Navigate to Project -> Compute -> Instances.
     3. Launch instance VM_1 with image TestVM and flavor m1.micro in nova availability zone.
     4. Launch instance VM_2 with image TestVM-VMDK and flavor m1.micro in vcenter availability zone.
-    5. Verify that VM_1 and VM_2 on different hypervisors  communicate between each other. Send icmp ping from VM_1 of vCenter to VM_2 from Qemu/KVM and vice versa.
+    5. Verify that VM_1 and VM_2 on different hypervisors communicate between each other: check that instances can ping each other.
 
 
 Expected result
@@ -285,10 +285,10 @@ Steps
     1. Set up for system tests.
     2. Log in to Horizon Dashboard.
     3. Create tenant net_01 with subnet.
-    4. Navigate to Project ->  Compute -> Instances
+    4. Navigate to Project -> Compute -> Instances.
     5. Launch instance VM_1 with image TestVM and flavor m1.micro in nova availability zone in net_01
     6. Launch instance VM_2 with image TestVM-VMDK and flavor m1.micro in vcenter availability zone in net_01
-    7. Verify that instances on same tenants communicate between each other. Send icmp ping from VM_1 to VM_2  and vice versa.
+    7. Verify that instances on same tenants communicate between each other: check that VM_1 and VM_2 can ping each other.
 
 
 Expected result
@@ -297,7 +297,7 @@ Expected result
 Pings should get a response.
 
 
-Check connectivity between instances attached to different networks with and within a router between them.
+Check connectivity between instances attached to different networks with and without a router between them.
 ----------------------------------------------------------------------------------------------------------
 
 
@@ -310,7 +310,7 @@ dvs_different_networks
 Description
 ###########
 
-Check connectivity between instances attached to different networks with and within a router between them.
+Check connectivity between instances attached to different networks with and without a router between them.
 
 
 Complexity
@@ -333,11 +333,11 @@ Steps
     9. Launch instances in the net02 with image TestVM and flavor m1.micro in nova az.
     10. Launch instances in the net02 with image TestVM-VMDK and flavor m1.micro in vcenter az.
     11. Verify that instances of same networks communicate between each other via private ip.
-         Send icmp ping between instances.
+        Check that instances can ping each other.
     12. Verify that instances of different networks don't communicate between each other via private ip.
     13. Delete net_02 from Router_02 and add it to the Router_01.
     14. Verify that instances of different networks communicate between each other via private ip.
-         Send icmp ping between instances.
+        Check that instances can ping each other.
 
 
 Expected result
@@ -375,15 +375,15 @@ Steps
     2. Log in to Horizon Dashboard.
     3. Create non-admin tenant with name 'test_tenant': Identity -> Projects-> Create Project. On tab Project Members add admin with admin and member.
     4. Navigate to Project -> Network -> Networks
-    5. Create network  with  subnet.
-    6. Navigate to Project ->  Compute -> Instances
-    7. Launch instance VM_1  with image TestVM-VMDK in the vcenter availability zone.
+    5. Create network with subnet.
+    6. Navigate to Project -> Compute -> Instances.
+    7. Launch instance VM_1 with image TestVM-VMDK in the vcenter availability zone.
     8. Navigate to test_tenant.
-    9. Navigate to Project -> Network -> Networks
+    9. Navigate to Project -> Network -> Networks.
     10. Create Router, set gateway and add interface.
-    11. Navigate to Project ->  Compute -> Instances
+    11. Navigate to Project -> Compute -> Instances.
     12. Launch instance VM_2 with image TestVM-VMDK in the vcenter availability zone.
-    13. Verify that instances on different tenants don't communicate between each other. Send icmp ping from VM_1 of admin tenant to VM_2  of test_tenant and vice versa.
+    13. Verify that instances on different tenants don't communicate between each other: check that VM_1 and VM_2 cannot ping each other.
 
 
 Expected result
@@ -421,14 +421,14 @@ Steps
     2. Log in to Horizon Dashboard.
     3. Create net_01: net01_subnet, 192.168.112.0/24 and attach it to default router.
     4. Launch instance VM_1 of nova availability zone with image TestVM and flavor m1.micro in the default internal network.
-    5. Launch instance VM_2  of vcenter availability zone with image TestVM-VMDK and flavor m1.micro in the net_01.
-    6. Send ping from instances VM_1 and VM_2 to 8.8.8.8 or other outside ip.
+    5. Launch instance VM_2 of vcenter availability zone with image TestVM-VMDK and flavor m1.micro in the net_01.
+    6. Send icmp requests from instances VM_1 and VM_2 to 8.8.8.8 (or another outside ip) and check that replies arrive.
 
 
 Expected result
 ###############
 
-Pings should  get a response
+Pings should get a response.
 
 
 Check connectivity instances to public network with floating ip.
@@ -460,8 +460,8 @@ Steps
     2. Log in to Horizon Dashboard.
     3. Create net01: net01__subnet, 192.168.112.0/24 and attach it to the default router.
     4. Launch instance VM_1 of nova availability zone with image TestVM and flavor m1.micro in the default internal network. Associate floating ip.
-    5. Launch instance VM_2 of vcenter availability zone with image TestVM-VMDK  and flavor m1.micro in the net_01. Associate floating ip.
-    6. Send ping from instances VM_1 and VM_2 to 8.8.8.8 or other outside ip.
+    5. Launch instance VM_2 of vcenter availability zone with image TestVM-VMDK and flavor m1.micro in the net_01. Associate floating ip.
+    6. Send icmp requests from instances VM_1 and VM_2 to 8.8.8.8 (or another outside ip) and check that replies arrive.
 
 
 Expected result
@@ -497,29 +497,29 @@ Steps
 
     1. Set up for system tests.
     2. Create non default network with subnet net_01.
-    3. Launch 2 instances  of vcenter availability zone and 2 instances of nova availability zone in the tenant network net_01
-    4. Launch 2 instances  of vcenter availability zone and 2 instances of nova availability zone in the internal tenant network.
+    3. Launch 2 instances of vcenter availability zone and 2 instances of nova availability zone in the tenant network net_01.
+    4. Launch 2 instances of vcenter availability zone and 2 instances of nova availability zone in the internal tenant network.
     5. Attach net_01 to default router.
     6. Create security group SG_1 to allow ICMP traffic.
     7. Add Ingress rule for ICMP protocol to SG_1.
     8. Create security groups SG_2 to allow TCP traffic 22 port.
     9. Add Ingress rule for TCP protocol to SG_2.
-    10. Remove default security group and attach SG_1 and SG_2 to VMs
-    11. Check ping is available between instances.
+    10. Remove default security group and attach SG_1 and SG_2 to VMs.
+    11. Check that instances can ping each other.
     12. Check ssh connection is available between instances.
     13. Delete all rules from SG_1 and SG_2.
-    14. Check that ssh aren't available to instances.
+    14. Check that instances are not available via ssh.
     15. Add Ingress and egress rules for TCP protocol to SG_2.
     16. Check ssh connection is available between instances.
-    17. Check ping is not available between instances.
+    17. Check that instances cannot ping each other.
     18. Add Ingress and egress rules for ICMP protocol to SG_1.
-    19. Check ping is available between instances.
+    19. Check that instances can ping each other.
     20. Delete Ingress rule for ICMP protocol from SG_1 (for OS cirros skip this step).
     21. Add Ingress rule for ICMP ipv6 to SG_1 (for OS cirros skip this step).
     22. Check ping6 is available between instances. (for OS cirros skip this step).
     23. Delete SG1 and SG2 security groups.
     24. Attach instances to default security group.
-    25. Check ping is available between instances.
+    25. Check that instances can ping each other.
     26. Check ssh is available between instances.
 
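The steps above alternate between ICMP and ssh reachability checks as security group rules change. The ssh side can be sketched as a plain TCP connect to port 22, run from wherever the harness has network access to the instances (an illustration, not the suite's actual helper; a full check would also complete the ssh handshake):

```python
import socket

def ssh_port_open(host, port=22, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds,
    i.e. the security group rules let ssh traffic through."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # refused, filtered, or timed out -> treated as unreachable
        return False
```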
 
@@ -556,7 +556,7 @@ Steps
 
     1. Set up for system tests.
     2. Log in to Horizon Dashboard.
-    3. Launch 2 instances on each  hypervisors.
+    3. Launch 2 instances on each hypervisor (one in vcenter az and another one in nova az).
     4. Verify that traffic can be successfully sent from and received on the MAC and IP address associated with the logical port.
     5. Configure a new IP address on the instance associated with the logical port.
     6. Confirm that the instance cannot communicate with that IP address.
@@ -616,7 +616,7 @@ Steps
        * network: net1 with ip 10.0.0.5
        * SG: SG_1
     10. In tenant 'test_2' create net2 and subnet2 with CIDR 10.0.0.0/24.
-    11. In tenant 'test_2' Create Router 'router_2' with external floating network
+    11. In tenant 'test_2' create Router 'router_2' with external floating network.
     12. In tenant 'test_2' attach interface of net2, subnet2 to router_2
     13. In tenant "test_2" create security group "SG_2" and add rule that allows ingress icmp traffic.
     14. In tenant "test_2" launch instance:
@@ -633,16 +633,16 @@ Steps
         * flavor: m1.micro
         * network: net2 with ip 10.0.0.5
         * SG: SG_2
-    16. Assign floating ips for each instance
-    17. Check instances in tenant_1 communicate between each other by internal ip
-    18. Check instances in tenant_2 communicate between each other by internal ip
+    16. Assign floating ips for each instance.
+    17. Check instances in tenant_1 communicate between each other by internal ip.
+    18. Check instances in tenant_2 communicate between each other by internal ip.
     19. Check instances in different tenants communicate between each other by floating ip.
 
 
 Expected result
 ###############
 
-Pings should  get a response.
+Pings should get a response.
 
 
 Check creation instance in the one group simultaneously.
@@ -671,11 +671,11 @@ Steps
 #####
 
     1. Set up for system tests.
-    2. Navigate to Project -> Compute -> Instances
+    2. Navigate to Project -> Compute -> Instances.
     3. Launch few instances simultaneously with image TestVM and flavor m1.micro in nova availability zone in default internal network.
-    4. Launch few instances simultaneously with image TestVM-VMDK and flavor m1.micro in vcenter availability zone in  default internal network.
+    4. Launch few instances simultaneously with image TestVM-VMDK and flavor m1.micro in vcenter availability zone in default internal network.
     5. Check connection between instances (ping, ssh).
-    6. Delete all instances from horizon simultaneously.
+    6. Delete all instances from Horizon simultaneously.
 
 
 Expected result
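Steps 3 and 4 launch several instances simultaneously in one batch. One way such batched launches can be driven from a test harness is with a thread pool; a minimal sketch, where the `launch` callable is a hypothetical stand-in for whatever actually boots one instance (not the suite's real API):

```python
from concurrent.futures import ThreadPoolExecutor

def launch_batch(launch, names, max_workers=5):
    """Launch all `names` concurrently and return name -> result.

    Any failure inside `launch` propagates when the corresponding
    future's result is read, so a broken boot fails the batch.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {name: pool.submit(launch, name) for name in names}
        return {name: fut.result() for name, fut in futures.items()}
```

Deleting all instances simultaneously (step 6) can reuse the same pattern with a delete callable.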
@@ -726,7 +726,7 @@ Steps
     7. Configure VMware vCenter Settings. Add 1 vSphere clusters and configure Nova Compute instances on controllers.
     8. Verify networks.
     9. Deploy cluster.
-    10. Create  instances for each of hypervisor's type
+    10. Create instances for each hypervisor type.
     11. Create 2 volumes each in his own availability zone.
     12. Attach each volume to his instance.
 
@@ -800,17 +800,19 @@ Steps
     1. Upload plugins to the master node.
     2. Install plugin.
     3. Create cluster with vcenter.
-    4. Set CephOSD as backend for Glance and Cinder
+    4. Set CephOSD as backend for Glance and Cinder.
     5. Add nodes with following roles:
-                       controller
-                       compute-vmware
-                       compute-vmware
-                       compute
-                       3 ceph-osd
+           * Controller
+           * Compute-VMware
+           * Compute-VMware
+           * Compute
+           * Ceph-OSD
+           * Ceph-OSD
+           * Ceph-OSD
     6. Upload network template.
     7. Check network configuration.
-    8. Deploy the cluster
-    9. Run OSTF
+    8. Deploy the cluster.
+    9. Run OSTF.
 
 
 Expected result
@@ -832,8 +834,7 @@ dvs_vcenter_remote_sg
 Description
 ###########
 
-Verify that network traffic is allowed/prohibited to instances according security groups
-rules.
+Verify that network traffic is allowed/prohibited to instances according to security group rules.
 
 
 Complexity
@@ -859,41 +860,41 @@ Steps
        SG_man
        SG_DNS
     6. Add rules to SG_web:
-       Ingress rule with ip protocol 'http' , port range 80-80, ip range 0.0.0.0/0
-       Ingress rule with ip protocol 'tcp' , port range 3306-3306, SG group 'SG_db'
-       Ingress rule with ip protocol 'tcp' , port range 22-22, SG group 'SG_man
-       Engress rule with ip protocol 'http' , port range 80-80, ip range 0.0.0.0/0
-       Egress rule with ip protocol 'tcp' , port range 3306-3306, SG group 'SG_db'
-       Egress rule with ip protocol 'tcp' , port range 22-22, SG group 'SG_man
+       Ingress rule with ip protocol 'http', port range 80-80, ip range 0.0.0.0/0
+       Ingress rule with ip protocol 'tcp', port range 3306-3306, SG group 'SG_db'
+       Ingress rule with ip protocol 'tcp', port range 22-22, SG group 'SG_man'
+       Egress rule with ip protocol 'http', port range 80-80, ip range 0.0.0.0/0
+       Egress rule with ip protocol 'tcp', port range 3306-3306, SG group 'SG_db'
+       Egress rule with ip protocol 'tcp', port range 22-22, SG group 'SG_man'
     7. Add rules to SG_db:
-       Egress rule with ip protocol 'http' , port range 80-80, ip range 0.0.0.0/0
-       Egress rule with ip protocol 'https ' , port range 443-443, ip range 0.0.0.0/0
-       Ingress rule with ip protocol 'http' , port range 80-80, ip range 0.0.0.0/0
-       Ingress rule with ip protocol 'https ' , port range 443-443, ip range 0.0.0.0/0
-       Ingress rule with ip protocol 'tcp' , port range 3306-3306, SG group 'SG_web'
-       Ingress rule with ip protocol 'tcp' , port range 22-22, SG group 'SG_man'
-       Egress rule with ip protocol 'tcp' , port range 3306-3306, SG group 'SG_web'
-       Egress rule with ip protocol 'tcp' , port range 22-22, SG group 'SG_man'
+       Egress rule with ip protocol 'http', port range 80-80, ip range 0.0.0.0/0
+       Egress rule with ip protocol 'https', port range 443-443, ip range 0.0.0.0/0
+       Ingress rule with ip protocol 'http', port range 80-80, ip range 0.0.0.0/0
+       Ingress rule with ip protocol 'https', port range 443-443, ip range 0.0.0.0/0
+       Ingress rule with ip protocol 'tcp', port range 3306-3306, SG group 'SG_web'
+       Ingress rule with ip protocol 'tcp', port range 22-22, SG group 'SG_man'
+       Egress rule with ip protocol 'tcp', port range 3306-3306, SG group 'SG_web'
+       Egress rule with ip protocol 'tcp', port range 22-22, SG group 'SG_man'
     8. Add rules to SG_DNS:
-       Ingress rule with ip protocol 'udp ' , port range 53-53, ip-prefix 'ip DNS server'
-       Egress rule with ip protocol 'udp ' , port range 53-53, ip-prefix 'ip DNS server'
-       Ingress rule with ip protocol 'tcp' , port range 53-53, ip-prefix 'ip DNS server'
-       Egress rule with ip protocol 'tcp' , port range 53-53, ip-prefix 'ip DNS server'
+       Ingress rule with ip protocol 'udp', port range 53-53, ip-prefix 'ip DNS server'
+       Egress rule with ip protocol 'udp', port range 53-53, ip-prefix 'ip DNS server'
+       Ingress rule with ip protocol 'tcp', port range 53-53, ip-prefix 'ip DNS server'
+       Egress rule with ip protocol 'tcp', port range 53-53, ip-prefix 'ip DNS server'
     9. Add rules to SG_man:
-       Ingress rule with ip protocol 'tcp' , port range 22-22, ip range 0.0.0.0/0
-       Egress rule with ip protocol 'tcp' , port range 22-22, ip range 0.0.0.0/0
+       Ingress rule with ip protocol 'tcp', port range 22-22, ip range 0.0.0.0/0
+       Egress rule with ip protocol 'tcp', port range 22-22, ip range 0.0.0.0/0
     10. Launch following instances in net_1 from image 'ubuntu':
        instance 'webserver' of vcenter az with SG_web, SG_DNS
        instance 'mysqldb ' of vcenter az with SG_db, SG_DNS
        instance 'manage' of nova az with SG_man, SG_DNS
-    11. Verify that  traffic is enabled to instance 'webserver' from internet by http port 80.
-    12. Verify that  traffic is enabled to instance 'webserver' from VM 'manage' by tcp port 22.
+    11. Verify that traffic is enabled to instance 'webserver' from external network by http port 80.
+    12. Verify that traffic is enabled to instance 'webserver' from VM 'manage' by tcp port 22.
     13. Verify that traffic is enabled to instance 'webserver' from VM 'mysqldb' by tcp port 3306.
-    14. Verify that traffic is enabled to internet from instance ' mysqldb' by https port 443.
-    15. Verify that traffic is enabled to instance ' mysqldb' from VM 'manage' by tcp port 22.
-    16. Verify that traffic is enabled to instance ' manage' from internet by tcp port 22.
-    17. Verify that traffic is not enabled to instance ' webserver' from internet by tcp port 22.
-    18. Verify that traffic is not enabled to instance ' mysqldb' from internet by tcp port 3306.
+    14. Verify that traffic is enabled to internet from instance 'mysqldb' by https port 443.
+    15. Verify that traffic is enabled to instance 'mysqldb' from VM 'manage' by tcp port 22.
+    16. Verify that traffic is enabled to instance 'manage' from internet by tcp port 22.
+    17. Verify that traffic is not enabled to instance 'webserver' from internet by tcp port 22.
+    18. Verify that traffic is not enabled to instance 'mysqldb' from internet by tcp port 3306.
     19. Verify that traffic is not enabled to instance 'manage' from internet by http port 80.
     20. Verify that traffic is enabled to all instances from DNS server by udp/tcp port 53 and vice versa.
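Steps 6-9 define rule sets whose effect steps 11-20 then verify. Modelling such rules as data makes the expected allow/deny matrix easy to state; a minimal sketch with an illustrative subset of the SG_web rules (remote-group and remote-ip details deliberately omitted, names not taken from the test suite):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Rule:
    direction: str       # 'ingress' or 'egress'
    protocol: str        # 'tcp', 'udp', 'icmp'
    port: Optional[int]  # None for port-less protocols like icmp

# Illustrative subset of the SG_web rules from step 6.
SG_web = {
    Rule("ingress", "tcp", 80),
    Rule("ingress", "tcp", 22),
    Rule("egress", "tcp", 80),
    Rule("egress", "tcp", 3306),
}

def allows(group, direction, protocol, port):
    """Check whether a security group has a rule matching this traffic."""
    return Rule(direction, protocol, port) in group
```

With that model, step 11 corresponds to `allows(SG_web, "ingress", "tcp", 80)` being true, while a probe with no matching rule must be reported as blocked.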
 
@@ -901,8 +902,7 @@ Steps
 Expected result
 ###############
 
-Network traffic is allowed/prohibited to instances according security groups
-rules.
+Network traffic is allowed/prohibited to instances according to security group rules.
 
 
 Security group rules with remote group id simple.
@@ -918,8 +918,7 @@ dvs_remote_sg_simple
 Description
 ###########
 
-Verify that network traffic is allowed/prohibited to instances according security groups
-rules.
+Verify that network traffic is allowed/prohibited to instances according to security group rules.
 
 
 Complexity
@@ -947,16 +946,15 @@ Steps
        Launch 2 instance of nova az with SG1 in net1.
     8. Launch 2 instance of vcenter az with SG2 in net1.
        Launch 2 instance of nova az with SG2 in net1.
-    9. Verify that icmp ping is enabled between VMs from SG1.
-    10. Verify that icmp ping is enabled between instances from SG2.
-    11. Verify that icmp ping is not enabled between instances from SG1 and VMs from SG2.
+    9. Check that instances from SG1 can ping each other.
+    10. Check that instances from SG2 can ping each other.
+    11. Check that instances from SG1 cannot ping instances from SG2 and vice versa.
 
 
 Expected result
 ###############
 
-Network traffic is allowed/prohibited to instances according security groups
-rules.
+Network traffic is allowed/prohibited to instances according to security group rules.
 
 
 Check attached/detached ports with security groups.
@@ -1007,8 +1005,7 @@ Steps
1007 1005
 Expected result
1008 1006
 ###############
1009 1007
 
1010
-Verify that network traffic is allowed/prohibited to instances according security groups
1011
-rules.
1008
+Verify that network traffic is allowed/prohibited to instances according to security group rules.
1012 1009
 
1013 1010
 
1014 1011
 Check launch and remove instances in the one group simultaneously with few security groups.
@@ -1043,23 +1040,23 @@ Steps
1043 1040
        Egress rule with ip protocol 'icmp', port range any, SG group 'SG1'
1044 1041
        Ingress rule with ssh protocol 'tcp', port range 22, SG group 'SG1'
1045 1042
        Egress rule with ssh protocol 'tcp', port range 22, SG group 'SG1'
1046
-    4. Create security Sg2 group with rules:
1043
+    4. Create security group SG2 with rules:
1047 1044
        Ingress rule with ssh protocol 'tcp', port range 22, SG group 'SG2'
1048 1045
        Egress rule with ssh protocol 'tcp', port range 22, SG group 'SG2'
1049
-    5. Launch a few instances of vcenter availability zone with Default SG +SG1+SG2  in net1 in one batch.
1050
-    6. Launch a few instances of nova availability zone with Default SG +SG1+SG2  in net1 in one batch.
1046
+    5. Launch a few instances of vcenter availability zone with Default SG + SG1 + SG2 in net_1 in one batch.
1047
+    6. Launch a few instances of nova availability zone with Default SG + SG1 + SG2 in net_1 in one batch.
1051 1048
     7. Verify that icmp/ssh is enabled between instances.
1052 1049
     8. Remove all instances.
1053
-    9. Launch a few instances of nova availability zone with Default SG +SG1+SG2  in net1 in one batch.
1054
-    10. Launch a few instances of vcenter availability zone with Default SG +SG1+SG2  in net1 in one batch.
1050
+    9. Launch a few instances of nova availability zone with Default SG + SG1 + SG2 in net_1 in one batch.
1051
+    10. Launch a few instances of vcenter availability zone with Default SG + SG1 + SG2 in net_1 in one batch.
1055 1052
     11. Verify that icmp/ssh is enabled between instances.
1053
+    12. Remove all instances.
1056 1054
 
1057 1055
 
1058 1056
 Expected result
1059 1057
 ###############
1060 1058
 
1061
-Verify that network traffic is allowed/prohibited to instances according security groups
1062
-rules.
1059
+Verify that network traffic is allowed/prohibited to instances according to security group rules.
1063 1060
 
1064 1061
 
1065 1062
 Security group rules with remote ip prefix.
@@ -1102,18 +1099,17 @@ Steps
1102 1099
        Ingress rule with ip protocol 'tcp', port range any, <internal ip of VM2>
1103 1100
        Egress rule with ip protocol 'tcp', port range any, <internal ip of VM2>
1104 1101
     9. Launch 2 instance 'VM3' and 'VM4' of vcenter az with SG1 and SG2 in net1.
1105
-       Launch 2 instance 'VM5' and 'VM6'  of nova az with SG1 and SG2 in net1.
1106
-    10. Verify that icmp ping is enabled from 'VM3',  'VM4' ,  'VM5' and 'VM6'  to VM1 and vice versa.
1107
-    11. Verify that icmp ping is blocked between 'VM3',  'VM4' ,  'VM5' and 'VM6' and vice versa.
1108
-    12. Verify that ssh is enabled from 'VM3',  'VM4' ,  'VM5' and 'VM6'  to VM2 and vice versa.
1109
-    13. Verify that ssh is blocked between 'VM3',  'VM4' ,  'VM5' and 'VM6' and vice versa.
1102
+       Launch 2 instance 'VM5' and 'VM6' of nova az with SG1 and SG2 in net1.
1103
+    10. Check that instances 'VM3', 'VM4', 'VM5' and 'VM6' can ping VM1 and vice versa.
1104
+    11. Check that instances 'VM3', 'VM4', 'VM5' and 'VM6' cannot ping each other.
1105
+    12. Verify that ssh is enabled from 'VM3', 'VM4', 'VM5' and 'VM6' to VM2 and vice versa.
1106
+    13. Verify that ssh is blocked between 'VM3', 'VM4', 'VM5' and 'VM6' and vice versa.
1110 1107
 
1111 1108
 
1112 1109
 Expected result
1113 1110
 ###############
1114 1111
 
1115
-Verify that network traffic is allowed/prohibited to instances according security groups
1116
-rules.
1112
+Verify that network traffic is allowed/prohibited to instances according to security group rules.
1117 1113
 
1118 1114
 
1119 1115
 Fuel create mirror and update core repos on cluster with DVS
@@ -1143,14 +1139,14 @@ Steps
1143 1139
 
1144 1140
     1. Setup for system tests
1145 1141
     2. Log into controller node via Fuel CLI and get PID of services which were
1146
-        launched by plugin and store them.
1142
+       launched by plugin and store them.
1147 1143
     3. Launch the following command on the Fuel Master node:
1148
-        `fuel-mirror create -P ubuntu -G mos ubuntu`
1144
+       `fuel-mirror create -P ubuntu -G mos ubuntu`
1149 1145
     4. Run the command below on the Fuel Master node:
1150
-        `fuel-mirror apply -P ubuntu -G mos ubuntu --env <env_id> --replace`
1146
+       `fuel-mirror apply -P ubuntu -G mos ubuntu --env <env_id> --replace`
1151 1147
     5. Run the command below on the Fuel Master node:
1152
-        `fuel --env <env_id> node --node-id <node_ids_separeted_by_coma> --tasks setup_repositories`
1153
-        And wait until task is done.
1148
+       `fuel --env <env_id> node --node-id <node_ids_separated_by_comma> --tasks setup_repositories`
1149
+       And wait until task is done.
1154 1150
     6. Log into controller node and check plugins services are alive and their PID are not changed.
1155 1151
     7. Check all nodes remain in ready status.
1156 1152
     8. Rerun OSTF.
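The commands in steps 3-5 are the kind of strings the automated suite passes to `ssh_manager.execute_on_remote`; a sketch of templating them (the env id and node ids below are placeholders, not values from this change):

```python
# Templates for the fuel-mirror workflow described in the steps above.
CREATE = 'fuel-mirror create -P ubuntu -G mos ubuntu'
APPLY = 'fuel-mirror apply -P ubuntu -G mos ubuntu --env {env_id} --replace'
SETUP = ('fuel --env {env_id} node --node-id {node_ids} '
         '--tasks setup_repositories')

env_id = 1                                      # placeholder env id
node_ids = ','.join(str(n) for n in (1, 2, 3))  # placeholder node ids
commands = [CREATE,
            APPLY.format(env_id=env_id),
            SETUP.format(env_id=env_id, node_ids=node_ids)]
```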
@@ -1163,8 +1159,8 @@ Cluster (nodes) should remain in ready state.
1163 1159
 OSTF test should be passed on rerun
1164 1160
 
1165 1161
 
1166
-Modifying env with DVS plugin(removing/adding controller)
1167
----------------------------------------------------------
1162
+Modifying env with DVS plugin (removing/adding controller)
1163
+----------------------------------------------------------
1168 1164
 
1169 1165
 ID
1170 1166
 ##
@@ -1186,7 +1182,7 @@ core
1186 1182
 Steps
1187 1183
 #####
1188 1184
 
1189
-    1. Install DVS plugin
1185
+    1. Install DVS plugin.
1190 1186
     2. Create a new environment with following parameters:
1191 1187
         * Compute: KVM/QEMU with vCenter
1192 1188
         * Networking: Neutron with VLAN segmentation + Neutron with DVS
@@ -1202,15 +1198,15 @@ Steps
1202 1198
     5. Configure DVS plugin.
1203 1199
     6. Configure VMware vCenter Settings.
1204 1200
     7. Verify networks.
1205
-    8. Deploy changes
1206
-    9. Run OSTF
1201
+    8. Deploy changes.
1202
+    9. Run OSTF.
1207 1203
     10. Remove controller on which DVS agent is run.
1208
-    11. Deploy changes
1209
-    12. Rerun OSTF
1210
-    13. Add 1 nodes with controller role to the cluster
1211
-    14. Verify networks
1212
-    15. Redeploy changes
1213
-    16. Rerun OSTF
1204
+    11. Deploy changes.
1205
+    12. Rerun OSTF.
1206
+    13. Add 1 node with controller role to the cluster.
1207
+    14. Verify networks.
1208
+    15. Redeploy changes.
1209
+    16. Rerun OSTF.
1214 1210
 
1215 1211
 Expected result
1216 1212
 ###############
@@ -1242,14 +1238,14 @@ Steps
1242 1238
 #####
1243 1239
 
1244 1240
     1. Set up for system tests.
1245
-    2. Remove compute from the cluster
1246
-    3. Verify networks
1247
-    4. Deploy changes
1248
-    5. Rerun OSTF
1249
-    6. Add 1 node with compute role to the cluster
1250
-    7. Verify networks
1251
-    8. Redeploy changes
1252
-    9. Rerun OSTF
1241
+    2. Remove compute from the cluster.
1242
+    3. Verify networks.
1243
+    4. Deploy changes.
1244
+    5. Rerun OSTF.
1245
+    6. Add 1 node with compute role to the cluster.
1246
+    7. Verify networks.
1247
+    8. Redeploy changes.
1248
+    9. Rerun OSTF.
1253 1249
 
1254 1250
 Expected result
1255 1251
 ###############
@@ -1280,7 +1276,7 @@ core
1280 1276
 Steps
1281 1277
 #####
1282 1278
 
1283
-    1. Install DVS plugin
1279
+    1. Install DVS plugin.
1284 1280
     2. Create a new environment with following parameters:
1285 1281
         * Compute: KVM/QEMU with vCenter
1286 1282
         * Networking: Neutron with VLAN segmentation
@@ -1297,10 +1293,10 @@ Steps
1297 1293
     8. Add 1 node with compute-vmware role, configure Nova Compute instance on compute-vmware and redeploy cluster.
1298 1294
     9. Verify that previously created instance is working.
1299 1295
     10. Run OSTF tests.
1300
-    11. Delete compute-vmware
1301
-    12. Redeploy changes
1296
+    11. Delete compute-vmware.
1297
+    12. Redeploy changes.
1302 1298
     13. Verify that previously created instance is working.
1303
-    14. Run OSTF
1299
+    14. Run OSTF.
1304 1300
 
1305 1301
 Expected result
1306 1302
 ###############

+ 21
- 21
docs/test_plan/source/vmware_dvs_test_plan.rst View File

@@ -117,33 +117,33 @@ Target Test Items
117 117
 * Install/uninstall Fuel Vmware-DVS plugin
118 118
 * Deploy Cluster with Fuel Vmware-DVS plugin by Fuel
119 119
     * Roles of nodes
120
-        * controller
121
-        * compute
122
-        * cinder
123
-        * mongo
124
-        * compute-vmware
125
-        * cinder-vmware
120
+        * Controller
121
+        * Compute
122
+        * Cinder
123
+        * Mongo
124
+        * Compute-VMware
125
+        * Cinder-VMware
126 126
     * Hypervisors:
127
-        * KVM+Vcenter
128
-        * Qemu+Vcenter
127
+        * KVM + vCenter
128
+        * Qemu + vCenter
129 129
     * Storage:
130 130
         * Ceph
131 131
         * Cinder
132 132
         * VMWare vCenter/ESXi datastore for images
133 133
     * Network
134
-        * Neutron with Vlan segmentation
134
+        * Neutron with VLAN segmentation
135 135
         * HA + Neutron with VLAN
136 136
     * Additional components
137 137
         * Ceilometer
138 138
         * Health Check
139 139
     * Upgrade master node
140 140
 * MOS and VMware-DVS plugin
141
-    * Computes(Nova)
141
+    * Computes (Nova)
142 142
         * Launch and manage instances
143 143
         * Launch instances in batch
144 144
     * Networks (Neutron)
145
-        * Create and manage public and private networks.
146
-        * Create and manage routers.
145
+        * Create and manage public and private networks
146
+        * Create and manage routers
147 147
         * Port binding / disabling
148 148
         * Port security
149 149
         * Security groups
@@ -158,7 +158,7 @@ Target Test Items
158 158
         * Create and manage projects
159 159
         * Create and manage users
160 160
     * Glance
161
-        * Create  and manage images
161
+        * Create and manage images
162 162
 * GUI
163 163
     * Fuel UI
164 164
 * CLI
@@ -168,13 +168,13 @@ Target Test Items
168 168
 Test approach
169 169
 *************
170 170
 
171
-The project test approach consists of Smoke,  Integration, System, Regression
172
-Failover and Acceptance  test levels.
171
+The project test approach consists of Smoke, Integration, System, Regression,
172
+Failover and Acceptance test levels.
173 173
 
174 174
 **Smoke testing**
175 175
 
176 176
 The goal of smoke testing is to ensure that the most critical features of Fuel
177
-VMware DVS plugin work  after new build delivery. Smoke tests will be used by
177
+VMware DVS plugin work after new build delivery. Smoke tests will be used by
178 178
 QA to accept software builds from Development team.
179 179
 
180 180
 **Integration and System testing**
@@ -185,8 +185,8 @@ without gaps in dataflow.
185 185
 
186 186
 **Regression testing**
187 187
 
188
-The goal of regression testing is to verify that key features of  Fuel VMware
189
-DVS plugin  are not affected by any changes performed during preparation to
188
+The goal of regression testing is to verify that key features of Fuel VMware
189
+DVS plugin are not affected by any changes performed during preparation to
190 190
 release (includes defects fixing, new features introduction and possible
191 191
 updates).
192 192
 
@@ -199,7 +199,7 @@ malfunctions with undue loss of data or data integrity.
199 199
 **Acceptance testing**
200 200
 
201 201
 The goal of acceptance testing is to ensure that Fuel VMware DVS plugin has
202
-reached a level of stability that meets requirements  and acceptance criteria.
202
+reached a level of stability that meets requirements and acceptance criteria.
203 203
 
204 204
 
205 205
 ***********************
@@ -256,7 +256,7 @@ Project testing activities are to be resulted in the following reporting documen
256 256
 Acceptance criteria
257 257
 ===================
258 258
 
259
-* All acceptance criteria for user stories are met.
259
+* All acceptance criteria for user stories are met
260 260
 * All test cases are executed. BVT tests are passed
261 261
 * Critical and high issues are fixed
262 262
 * All required documents are delivered
@@ -268,4 +268,4 @@ Test cases
268 268
 
269 269
 .. include:: test_suite_smoke.rst
270 270
 .. include:: test_suite_system.rst
271
-.. include:: test_suite_failover.rst
271
+.. include:: test_suite_failover.rst

+ 108
- 130
plugin_test/tests/test_plugin_vmware_dvs_destructive.py View File

@@ -40,7 +40,7 @@ TestBasic = fuelweb_test.tests.base_test_case.TestBasic
40 40
 SetupEnvironment = fuelweb_test.tests.base_test_case.SetupEnvironment
41 41
 
42 42
 
43
-@test(groups=["plugins", 'dvs_vcenter_plugin', 'dvs_vcenter_system',
43
+@test(groups=['plugins', 'dvs_vcenter_plugin', 'dvs_vcenter_system',
44 44
               'dvs_vcenter_destructive'])
45 45
 class TestDVSDestructive(TestBasic):
46 46
     """Failover test suite.
@@ -59,19 +59,17 @@ class TestDVSDestructive(TestBasic):
59 59
     cmds = ['nova-manage service list | grep vcenter-vmcluster1',
60 60
             'nova-manage service list | grep vcenter-vmcluster2']
61 61
 
62
-    networks = [
63
-        {'name': 'net_1',
64
-         'subnets': [
65
-             {'name': 'subnet_1',
66
-              'cidr': '192.168.112.0/24'}
67
-         ]
68
-         },
69
-        {'name': 'net_2',
70
-         'subnets': [
71
-             {'name': 'subnet_1',
72
-              'cidr': '192.168.113.0/24'}
73
-         ]
74
-         }
62
+    networks = [{
63
+        'name': 'net_1',
64
+        'subnets': [
65
+            {'name': 'subnet_1',
66
+             'cidr': '192.168.112.0/24'}
67
+        ]}, {
68
+        'name': 'net_2',
69
+        'subnets': [
70
+            {'name': 'subnet_1',
71
+             'cidr': '192.168.113.0/24'}
72
+        ]}
75 73
     ]
76 74
 
77 75
     # defaults
@@ -90,47 +88,47 @@ class TestDVSDestructive(TestBasic):
90 88
 
91 89
         :param openstack_ip: type string, openstack ip
92 90
         """
93
-        admin = os_actions.OpenStackActions(
91
+        os_conn = os_actions.OpenStackActions(
94 92
             openstack_ip, SERVTEST_USERNAME,
95 93
             SERVTEST_PASSWORD,
96 94
             SERVTEST_TENANT)
97 95
 
98
-        # create security group with rules for ssh and ping
99
-        security_group = admin.create_sec_group_for_ssh()
96
+        # Create security group with rules for ssh and ping
97
+        security_group = os_conn.create_sec_group_for_ssh()
100 98
 
101
-        default_sg = [
102
-            sg
103
-            for sg in admin.neutron.list_security_groups()['security_groups']
104
-            if sg['tenant_id'] == admin.get_tenant(SERVTEST_TENANT).id
105
-            if sg['name'] == 'default'][0]
99
+        _sec_groups = os_conn.neutron.list_security_groups()['security_groups']
100
+        _serv_tenant_id = os_conn.get_tenant(SERVTEST_TENANT).id
101
+        default_sg = [sg for sg in _sec_groups
102
+                      if sg['tenant_id'] == _serv_tenant_id and
103
+                      sg['name'] == 'default'][0]
106 104
 
107
-        network = admin.nova.networks.find(label=self.inter_net_name)
105
+        network = os_conn.nova.networks.find(label=self.inter_net_name)
108 106
 
109
-        # create access point server
110
-        access_point, access_point_ip = openstack.create_access_point(
111
-            os_conn=admin, nics=[{'net-id': network.id}],
107
+        # Create access point server
108
+        _, access_point_ip = openstack.create_access_point(
109
+            os_conn=os_conn,
110
+            nics=[{'net-id': network.id}],
112 111
             security_groups=[security_group.name, default_sg['name']])
113 112
 
114 113
         self.show_step(11)
115 114
         self.show_step(12)
116 115
         instances = openstack.create_instances(
117
-            os_conn=admin, nics=[{'net-id': network.id}],
116
+            os_conn=os_conn,
117
+            nics=[{'net-id': network.id}],
118 118
             vm_count=1,
119 119
             security_groups=[default_sg['name']])
120
-        openstack.verify_instance_state(admin)
120
+        openstack.verify_instance_state(os_conn)
121 121
 
122 122
         # Get private ips of instances
123
-        ips = []
124
-        for instance in instances:
125
-            ips.append(admin.get_nova_instance_ip(
126
-                instance, net_name=self.inter_net_name))
123
+        ips = [os_conn.get_nova_instance_ip(i, net_name=self.inter_net_name)
124
+               for i in instances]
127 125
         time.sleep(30)
128 126
         self.show_step(13)
129 127
         openstack.ping_each_other(ips=ips, access_point_ip=access_point_ip)
130 128
 
131 129
         self.show_step(14)
132
-        vcenter_name = [
133
-            name for name in self.WORKSTATION_NODES if 'vcenter' in name].pop()
130
+        vcenter_name = [name for name in self.WORKSTATION_NODES
131
+                        if 'vcenter' in name].pop()
134 132
         node = vmrun.Vmrun(
135 133
             self.host_type,
136 134
             self.path_to_vmx_file.format(vcenter_name),
@@ -143,13 +141,13 @@ class TestDVSDestructive(TestBasic):
143 141
         wait(lambda: not icmp_ping(self.VCENTER_IP),
144 142
              interval=1,
145 143
              timeout=10,
146
-             timeout_msg='Vcenter is still available.')
144
+             timeout_msg='vCenter is still available.')
147 145
 
148 146
         self.show_step(16)
149 147
         wait(lambda: icmp_ping(self.VCENTER_IP),
150 148
              interval=5,
151 149
              timeout=120,
152
-             timeout_msg='Vcenter is not available.')
150
+             timeout_msg='vCenter is not available.')
153 151
 
154 152
         self.show_step(17)
155 153
         openstack.ping_each_other(ips=ips, access_point_ip=access_point_ip)
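Several hunks in this file replace append loops with list comprehensions; the two forms are equivalent, shown here with a stand-in for `get_nova_instance_ip` and fake data (the real call needs a live cluster):

```python
instances = ['vm-1', 'vm-2']
fake_ips = {'vm-1': '10.0.0.3', 'vm-2': '10.0.0.4'}

def get_nova_instance_ip(instance, net_name=None):
    # Stand-in for os_conn.get_nova_instance_ip()
    return fake_ips[instance]

# Old form: build the list with an explicit loop
ips = []
for instance in instances:
    ips.append(get_nova_instance_ip(instance, net_name='net04'))

# New form: one list comprehension, same result
ips_new = [get_nova_instance_ip(i, net_name='net04') for i in instances]
assert ips == ips_new
```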
@@ -163,7 +161,7 @@ class TestDVSDestructive(TestBasic):
163 161
         Scenario:
164 162
             1. Revert snapshot to dvs_vcenter_systest_setup.
165 163
             2. Try to uninstall dvs plugin.
166
-            3. Check that plugin is not removed
164
+            3. Check that plugin is not removed.
167 165
 
168 166
         Duration: 1.8 hours
169 167
 
@@ -178,12 +176,13 @@ class TestDVSDestructive(TestBasic):
178 176
         self.ssh_manager.execute_on_remote(
179 177
             ip=self.ssh_manager.admin_ip,
180 178
             cmd=cmd,
181
-            assert_ec_equal=[1]
182
-        )
179
+            assert_ec_equal=[1])
183 180
 
184 181
         self.show_step(3)
185 182
         output = self.ssh_manager.execute_on_remote(
186
-            ip=self.ssh_manager.admin_ip, cmd='fuel plugins list')['stdout']
183
+            ip=self.ssh_manager.admin_ip,
184
+            cmd='fuel plugins list'
185
+        )['stdout']
187 186
         assert_true(plugin.plugin_name in output[-1].split(' '),
188 187
                     "Plugin '{0}' was removed".format(plugin.plugin_name))
189 188
 
@@ -194,19 +193,18 @@ class TestDVSDestructive(TestBasic):
194 193
         """Check abilities to bind port on DVS to VM, disable/enable this port.
195 194
 
196 195
         Scenario:
197
-            1. Revert snapshot to dvs_vcenter_systest_setup
196
+            1. Revert snapshot to dvs_vcenter_systest_setup.
198 197
             2. Create private networks net01 with subnet.
199
-            3. Launch instance VM_1 in the net01
200
-               with image TestVM and flavor m1.micro in nova az.
201
-            4. Launch instance VM_2 in the net01
202
-               with image TestVM-VMDK and flavor m1.micro in vcenter az.
198
+            3. Launch instance VM_1 in the net01 with
199
+               image TestVM and flavor m1.micro in nova az.
200
+            4. Launch instance VM_2 in the net01 with
201
+               image TestVM-VMDK and flavor m1.micro in vcenter az.
203 202
             5. Disable sub_net port of instances.
204 203
             6. Check instances are not available.
205 204
             7. Enable sub_net port of all instances.
206 205
             8. Verify that instances communicate between each other.
207 206
                Send icmp ping between instances.
208 207
 
209
-
210 208
         Duration: 1,5 hours
211 209
 
212 210
         """
@@ -221,22 +219,20 @@ class TestDVSDestructive(TestBasic):
221 219
             SERVTEST_PASSWORD,
222 220
             SERVTEST_TENANT)
223 221
 
224
-        # create security group with rules for ssh and ping
222
+        # Create security group with rules for ssh and ping
225 223
         security_group = os_conn.create_sec_group_for_ssh()
226 224
 
227 225
         self.show_step(2)
228 226
         net = self.networks[0]
229
-        network = os_conn.create_network(network_name=net['name'])['network']
227
+        net_1 = os_conn.create_network(network_name=net['name'])['network']
230 228
 
231 229
         subnet = os_conn.create_subnet(
232 230
             subnet_name=net['subnets'][0]['name'],
233
-            network_id=network['id'],
231
+            network_id=net_1['id'],
234 232
             cidr=net['subnets'][0]['cidr'])
235 233
 
236 234
         logger.info("Check network was created.")
237
-        assert_true(
238
-            os_conn.get_network(network['name'])['id'] == network['id']
239
-        )
235
+        assert_true(os_conn.get_network(net_1['name'])['id'] == net_1['id'])
240 236
 
241 237
         logger.info("Add net_1 to default router")
242 238
         router = os_conn.get_router(os_conn.get_network(self.ext_net_name))
@@ -246,42 +242,37 @@ class TestDVSDestructive(TestBasic):
246 242
         self.show_step(3)
247 243
         self.show_step(4)
248 244
         instances = openstack.create_instances(
249
-            os_conn=os_conn, nics=[{'net-id': network['id']}], vm_count=1,
250
-            security_groups=[security_group.name]
251
-        )
245
+            os_conn=os_conn,
246
+            nics=[{'net-id': net_1['id']}],
247
+            vm_count=1,
248
+            security_groups=[security_group.name])
252 249
         openstack.verify_instance_state(os_conn)
253 250
 
254 251
         ports = os_conn.neutron.list_ports()['ports']
255 252
         fips = openstack.create_and_assign_floating_ips(os_conn, instances)
256 253
 
257
-        inst_ips = [os_conn.get_nova_instance_ip(
258
-            instance, net_name=network['name']) for instance in instances]
254
+        inst_ips = [os_conn.get_nova_instance_ip(i, net_name=net_1['name'])
255
+                    for i in instances]
259 256
         inst_ports = [p for p in ports
260 257
                       if p['fixed_ips'][0]['ip_address'] in inst_ips]
261 258
 
262 259
         self.show_step(5)
260
+        _body = {'port': {'admin_state_up': False}}
263 261
         for port in inst_ports:
264
-            os_conn.neutron.update_port(
265
-                port['id'], {'port': {'admin_state_up': False}}
266
-            )
262
+            os_conn.neutron.update_port(port=port['id'], body=_body)
267 263
 
268 264
         self.show_step(6)
269
-        # TODO(vgorin) create better solution for this step
270 265
         try:
271 266
             openstack.ping_each_other(fips)
272
-            checker = 1
273 267
         except Exception as e:
274 268
             logger.info(e)
275
-            checker = 0
276
-
277
-        if checker:
269
+        else:
278 270
             fail('Ping is available between instances')
279 271
 
280 272
         self.show_step(7)
273
+        _body = {'port': {'admin_state_up': True}}
281 274
         for port in inst_ports:
282
-            os_conn.neutron.update_port(
283
-                port['id'], {'port': {'admin_state_up': True}}
284
-            )
275
+            os_conn.neutron.update_port(port=port['id'], body=_body)
285 276
 
286 277
         self.show_step(8)
287 278
         openstack.ping_each_other(fips, timeout=90)
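The bind-port refactor above also hoists the request body out of the loop; the pattern looks like this, with a local stand-in for `neutron.update_port` (the real call goes to the Neutron API):

```python
ports = [{'id': 'port-1', 'admin_state_up': True},
         {'id': 'port-2', 'admin_state_up': True}]

def update_port(port, body):
    # Stand-in for os_conn.neutron.update_port(port_id, body)
    port.update(body['port'])
    return port

# Disable every instance port with one shared body, as in step 5
_body = {'port': {'admin_state_up': False}}
for port in ports:
    update_port(port, _body)
```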
@@ -290,19 +281,19 @@ class TestDVSDestructive(TestBasic):
290 281
           groups=["dvs_destructive_setup_2"])
291 282
     @log_snapshot_after_test
292 283
     def dvs_destructive_setup_2(self):
293
-        """Verify that vmclusters should be migrate after reset controller.
284
+        """Verify that vmclusters migrate after reset controller.
294 285
 
295 286
         Scenario:
296
-            1. Upload plugins to the master node
287
+            1. Upload plugins to the master node.
297 288
             2. Install plugin.
298 289
             3. Configure cluster with 2 vcenter clusters.
299 290
             4. Add 3 node with controller role.
300 291
             5. Add 2 node with compute role.
301
-            6. Configure vcenter
292
+            6. Configure vcenter.
302 293
             7. Deploy the cluster.
303 294
             8. Run smoke OSTF tests
304 295
             9. Launch instances. 1 per az. Assign floating ips.
305
-            10. Make snapshot
296
+            10. Make snapshot.
306 297
 
307 298
         Duration: 1.8 hours
308 299
         Snapshot: dvs_destructive_setup_2
@@ -325,14 +316,12 @@ class TestDVSDestructive(TestBasic):
325 316
 
326 317
         self.show_step(3)
327 318
         self.show_step(4)
328
-        self.fuel_web.update_nodes(
329
-            cluster_id,
330
-            {'slave-01': ['controller'],
331
-             'slave-02': ['controller'],
332
-             'slave-03': ['controller'],
333
-             'slave-04': ['compute'],
334
-             'slave-05': ['compute']}
335
-        )
319
+        self.fuel_web.update_nodes(cluster_id,
320
+                                   {'slave-01': ['controller'],
321
+                                    'slave-02': ['controller'],
322
+                                    'slave-03': ['controller'],
323
+                                    'slave-04': ['compute'],
324
+                                    'slave-05': ['compute']})
336 325
         self.show_step(6)
337 326
         self.fuel_web.vcenter_configure(cluster_id, multiclusters=True)
338 327
 
@@ -340,8 +329,7 @@ class TestDVSDestructive(TestBasic):
340 329
         self.fuel_web.deploy_cluster_wait(cluster_id)
341 330
 
342 331
         self.show_step(8)
343
-        self.fuel_web.run_ostf(
344
-            cluster_id=cluster_id, test_sets=['smoke'])
332
+        self.fuel_web.run_ostf(cluster_id=cluster_id, test_sets=['smoke'])
345 333
 
346 334
         self.show_step(9)
347 335
         os_ip = self.fuel_web.get_public_vip(cluster_id)
@@ -354,9 +342,10 @@ class TestDVSDestructive(TestBasic):
354 342
 
355 343
         network = os_conn.nova.networks.find(label=self.inter_net_name)
356 344
         instances = openstack.create_instances(
357
-            os_conn=os_conn, nics=[{'net-id': network.id}], vm_count=1,
358
-            security_groups=[security_group.name]
359
-        )
345
+            os_conn=os_conn,
346
+            nics=[{'net-id': network.id}],
347
+            vm_count=1,
348
+            security_groups=[security_group.name])
360 349
         openstack.verify_instance_state(os_conn)
361 350
         openstack.create_and_assign_floating_ips(os_conn, instances)
362 351
 
@@ -367,16 +356,16 @@ class TestDVSDestructive(TestBasic):
367 356
           groups=["dvs_vcenter_reset_controller"])
368 357
     @log_snapshot_after_test
369 358
     def dvs_vcenter_reset_controller(self):
370
-        """Verify that vmclusters should be migrate after reset controller.
359
+        """Verify that vmclusters migrate after reset controller.
371 360
 
372 361
         Scenario:
373 362
             1. Revert to 'dvs_destructive_setup_2' snapshot.
374 363
             2. Verify connection between instances. Send ping,
375
-               check that ping get reply
364
+               check that the ping gets a reply.
376 365
             3. Reset controller.
377 366
             4. Check that vmclusters migrate to another controller.
378
-            5. Verify connection between instances.
379
-                Send ping, check that ping get reply
367
+            5. Verify connection between instances. Send ping, check that
368
+               the ping gets a reply.
380 369
 
381 370
         Duration: 1.8 hours
382 371
 
@@ -393,11 +382,9 @@ class TestDVSDestructive(TestBasic):
393 382
 
394 383
         self.show_step(2)
395 384
         srv_list = os_conn.get_servers()
396
-        fips = []
397
-        for srv in srv_list:
398
-            fips.append(os_conn.get_nova_instance_ip(
399
-                srv, net_name=self.inter_net_name, addrtype='floating'))
400
-
385
+        fips = [os_conn.get_nova_instance_ip(s, net_name=self.inter_net_name,
386
+                                             addrtype='floating')
387
+                for s in srv_list]
401 388
         openstack.ping_each_other(fips)
402 389
 
403 390
         d_ctrl = self.fuel_web.get_nailgun_primary_node(
@@ -417,16 +404,16 @@ class TestDVSDestructive(TestBasic):
417 404
           groups=["dvs_vcenter_shutdown_controller"])
418 405
     @log_snapshot_after_test
419 406
     def dvs_vcenter_shutdown_controller(self):
420
-        """Verify that vmclusters should be migrate after shutdown controller.
407
+        """Verify that vmclusters migrate after shutdown controller.
421 408
 
422 409
         Scenario:
423 410
             1. Revert to 'dvs_destructive_setup_2' snapshot.
424
-            2.  Verify connection between instances. Send ping,
411
+            2. Verify connection between instances. Send ping,
425 412
                check that ping get reply.
426 413
             3. Shutdown controller.
427
-            4. Check that vmclusters should be migrate to another controller.
414
+            4. Check that vmclusters migrate to another controller.
428 415
             5. Verify connection between instances.
429
-                Send ping, check that ping get reply
416
+               Send ping, check that the ping gets a reply.
430 417
 
431 418
         Duration: 1.8 hours
432 419
 
@@ -443,10 +430,9 @@ class TestDVSDestructive(TestBasic):
443 430
 
444 431
         self.show_step(2)
445 432
         srv_list = os_conn.get_servers()
446
-        fips = []
447
-        for srv in srv_list:
448
-            fips.append(os_conn.get_nova_instance_ip(
449
-                srv, net_name=self.inter_net_name, addrtype='floating'))
433
+        fips = [os_conn.get_nova_instance_ip(
434
+                    srv, net_name=self.inter_net_name, addrtype='floating')
435
+                for srv in srv_list]
450 436
         openstack.ping_each_other(fips)
451 437
 
         n_ctrls = self.fuel_web.get_nailgun_cluster_nodes_by_roles(
@@ -469,7 +455,7 @@ class TestDVSDestructive(TestBasic):
           groups=["dvs_reboot_vcenter_1"])
     @log_snapshot_after_test
     def dvs_reboot_vcenter_1(self):
-        """Verify that vmclusters should be migrate after reset controller.
+        """Verify that vmclusters migrate after reset controller.
 
         Scenario:
             1. Install DVS plugin on master node.
@@ -495,15 +481,14 @@ class TestDVSDestructive(TestBasic):
                 and flavor m1.micro.
             12. Launch instance VM_2 with image TestVM-VMDK, availability zone
                 vcenter and flavor m1.micro.
-            13. Check connection between instances, send ping from VM_1 to VM_2
-                and vice verse.
+            13. Verify connection between instances: check that VM_1 and VM_2
+                can ping each other.
             14. Reboot vcenter.
             15. Check that controller lost connection with vCenter.
             16. Wait for vCenter.
             17. Ensure that all instances from vCenter displayed in dashboard.
             18. Run OSTF.
 
-
         Duration: 2.5 hours
 
         """
@@ -525,13 +510,11 @@ class TestDVSDestructive(TestBasic):
         self.show_step(3)
         self.show_step(4)
         self.show_step(5)
-        self.fuel_web.update_nodes(
-            cluster_id,
-            {'slave-01': ['controller'],
-             'slave-02': ['compute'],
-             'slave-03': ['cinder-vmware'],
-             'slave-04': ['cinder']}
-        )
+        self.fuel_web.update_nodes(cluster_id,
+                                   {'slave-01': ['controller'],
+                                    'slave-02': ['compute'],
+                                    'slave-03': ['cinder-vmware'],
+                                    'slave-04': ['cinder']})
 
         self.show_step(6)
         plugin.enable_plugin(cluster_id, self.fuel_web)
 
@@ -558,7 +541,7 @@ class TestDVSDestructive(TestBasic):
           groups=["dvs_reboot_vcenter_2"])
     @log_snapshot_after_test
     def dvs_reboot_vcenter_2(self):
-        """Verify that vmclusters should be migrate after reset controller.
+        """Verify that vmclusters migrate after reset controller.
 
         Scenario:
             1. Install DVS plugin on master node.
@@ -585,15 +568,14 @@ class TestDVSDestructive(TestBasic):
                 and flavor m1.micro.
             12. Launch instance VM_2 with image TestVM-VMDK, availability zone
                 vcenter and flavor m1.micro.
-            13. Check connection between instances, send ping from VM_1 to VM_2
-                and vice verse.
+            13. Verify connection between instances: check that VM_1 and VM_2
+                can ping each other.
             14. Reboot vcenter.
             15. Check that controller lost connection with vCenter.
             16. Wait for vCenter.
             17. Ensure that all instances from vCenter displayed in dashboard.
             18. Run Smoke OSTF.
 
-
         Duration: 2.5 hours
 
         """
@@ -615,25 +597,21 @@ class TestDVSDestructive(TestBasic):
         self.show_step(3)
         self.show_step(4)
         self.show_step(5)
-        self.fuel_web.update_nodes(
-            cluster_id,
-            {'slave-01': ['controller'],
-             'slave-02': ['compute'],
-             'slave-03': ['cinder-vmware'],
-             'slave-04': ['cinder'],
-             'slave-05': ['compute-vmware']}
-        )
+        self.fuel_web.update_nodes(cluster_id,
+                                   {'slave-01': ['controller'],
+                                    'slave-02': ['compute'],
+                                    'slave-03': ['cinder-vmware'],
+                                    'slave-04': ['cinder'],
+                                    'slave-05': ['compute-vmware']})
 
         self.show_step(6)
         plugin.enable_plugin(cluster_id, self.fuel_web)
 
         self.show_step(7)
         target_node_1 = self.node_name('slave-05')
-        self.fuel_web.vcenter_configure(
-            cluster_id,
-            target_node_1=target_node_1,
-            multiclusters=False
-        )
+        self.fuel_web.vcenter_configure(cluster_id,
+                                        target_node_1=target_node_1,
+                                        multiclusters=False)
 
         self.show_step(8)
         self.fuel_web.verify_network(cluster_id)

+ 57 - 62  plugin_test/tests/test_plugin_vmware_dvs_maintenance.py  View File

@@ -54,22 +54,16 @@ class TestDVSMaintenance(TestBasic):
         """Deploy cluster with plugin and vmware datastore backend.
 
         Scenario:
-            1. Upload plugins to the master node
-            2. Install plugin.
-            3. Create cluster with vcenter.
-            4. Add 3 node with controller role.
-            5. Add 2 node with compute + ceph role.
-            6. Add 1 node with compute-vmware + cinder vmware role.
-            7. Deploy the cluster.
-            8. Run OSTF.
-            9. Create non default network.
-            10. Create Security groups
-            11. Launch instances with created network in nova and vcenter az.
-            12. Attached created security groups to instances.
-            13. Check connection between instances from different az.
+            1. Revert to dvs_bvt snapshot.
+            2. Create non default network net_1.
+            3. Launch instances with created network in nova and vcenter az.
+            4. Create Security groups.
+            5. Attach created security groups to instances.
+            6. Check connection between instances from different az.
 
         Duration: 1.8 hours
         """
+        self.show_step(1)
         self.env.revert_snapshot("dvs_bvt")
 
         cluster_id = self.fuel_web.get_last_created_cluster()
@@ -81,80 +75,81 @@ class TestDVSMaintenance(TestBasic):
             SERVTEST_TENANT)
 
         tenant = os_conn.get_tenant(SERVTEST_TENANT)
-        # Create non default network with subnet.
+
+        # Create non default network with subnet
+        self.show_step(2)
+
         logger.info('Create network {}'.format(self.net_data[0].keys()[0]))
-        network = os_conn.create_network(
+        net_1 = os_conn.create_network(
             network_name=self.net_data[0].keys()[0],
             tenant_id=tenant.id)['network']
 
         subnet = os_conn.create_subnet(
-            subnet_name=network['name'],
-            network_id=network['id'],
+            subnet_name=net_1['name'],
+            network_id=net_1['id'],
             cidr=self.net_data[0][self.net_data[0].keys()[0]],
             ip_version=4)
 
-        # Check that network are created.
-        assert_true(
-            os_conn.get_network(network['name'])['id'] == network['id']
-        )
+        # Check that network is created
+        assert_true(os_conn.get_network(net_1['name'])['id'] == net_1['id'])
+
         # Add net_1 to default router
         router = os_conn.get_router(os_conn.get_network(self.ext_net_name))
-        os_conn.add_router_interface(
-            router_id=router["id"],
-            subnet_id=subnet["id"])
-        # Launch instance 2 VMs of vcenter and 2 VMs of nova
-        # in the tenant network net_01
-        openstack.create_instances(
-            os_conn=os_conn, vm_count=1,
-            nics=[{'net-id': network['id']}]
-        )
-        # Launch instance 2 VMs of vcenter and 2 VMs of nova
-        # in the default network
-        network = os_conn.nova.networks.find(label=self.inter_net_name)
-        instances = openstack.create_instances(
-            os_conn=os_conn, vm_count=1,
-            nics=[{'net-id': network.id}])
+        os_conn.add_router_interface(router_id=router["id"],
+                                     subnet_id=subnet["id"])
+
+        self.show_step(3)
+
+        # Launch 2 vcenter VMs and 2 nova VMs in the tenant network net_01
+        openstack.create_instances(os_conn=os_conn,
+                                   vm_count=1,
+                                   nics=[{'net-id': net_1['id']}])
+
+        # Launch 2 vcenter VMs and 2 nova VMs in the default network
+        net_1 = os_conn.nova.networks.find(label=self.inter_net_name)
+        instances = openstack.create_instances(os_conn=os_conn,
+                                               vm_count=1,
+                                               nics=[{'net-id': net_1.id}])
         openstack.verify_instance_state(os_conn)
 
+        self.show_step(4)
+
         # Create security groups SG_1 to allow ICMP traffic.
         # Add Ingress rule for ICMP protocol to SG_1
         # Create security groups SG_2 to allow TCP traffic 22 port.
         # Add Ingress rule for TCP protocol to SG_2
         sec_name = ['SG1', 'SG2']
-        sg1 = os_conn.nova.security_groups.create(
-            sec_name[0], "descr")
-        sg2 = os_conn.nova.security_groups.create(
-            sec_name[1], "descr")
-        rulesets = [
-            {
-                # ssh
-                'ip_protocol': 'tcp',
-                'from_port': 22,
-                'to_port': 22,
-                'cidr': '0.0.0.0/0',
-            },
-            {
-                # ping
-                'ip_protocol': 'icmp',
-                'from_port': -1,
-                'to_port': -1,
-                'cidr': '0.0.0.0/0',
-            }
-        ]
-        os_conn.nova.security_group_rules.create(
-            sg1.id, **rulesets[0]
-        )
-        os_conn.nova.security_group_rules.create(
-            sg2.id, **rulesets[1]
-        )
+        sg1 = os_conn.nova.security_groups.create(sec_name[0], "descr")
+        sg2 = os_conn.nova.security_groups.create(sec_name[1], "descr")
+        rulesets = [{
+            # ssh
+            'ip_protocol': 'tcp',
+            'from_port': 22,
+            'to_port': 22,
+            'cidr': '0.0.0.0/0',
+        }, {
+            # ping
+            'ip_protocol': 'icmp',
+            'from_port': -1,
+            'to_port': -1,
+            'cidr': '0.0.0.0/0',
+        }]
+        os_conn.nova.security_group_rules.create(sg1.id, **rulesets[0])
+        os_conn.nova.security_group_rules.create(sg2.id, **rulesets[1])
+
         # Remove default security group and attach SG_1 and SG2 to VMs
+        self.show_step(5)
+
         srv_list = os_conn.get_servers()
         for srv in srv_list:
             srv.remove_security_group(srv.security_groups[0]['name'])
             srv.add_security_group(sg1.id)
             srv.add_security_group(sg2.id)
         fip = openstack.create_and_assign_floating_ips(os_conn, instances)
+
         # Check ping between VMs
+        self.show_step(6)
+
         ip_pair = dict.fromkeys(fip)
         for key in ip_pair:
             ip_pair[key] = [value for value in fip if key != value]

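The ping check above builds, for each floating IP, the list of peers it should be able to reach. A standalone sketch of the `dict.fromkeys` pairing pattern (the IPs are made up):

```python
# Made-up floating IPs standing in for create_and_assign_floating_ips output.
fip = ['10.0.0.10', '10.0.0.11', '10.0.0.12']

# One dict entry per IP, initially None.
ip_pair = dict.fromkeys(fip)

# For every IP, the peers are all the other IPs in the list.
for key in ip_pair:
    ip_pair[key] = [value for value in fip if key != value]

assert ip_pair['10.0.0.10'] == ['10.0.0.11', '10.0.0.12']
assert ip_pair['10.0.0.12'] == ['10.0.0.10', '10.0.0.11']
```

The resulting mapping is what the suite feeds into its all-to-all ping verification.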
+ 50 - 42  plugin_test/tests/test_plugin_vmware_dvs_smoke.py  View File

@@ -26,12 +26,12 @@ TestBasic = fuelweb_test.tests.base_test_case.TestBasic
 SetupEnvironment = fuelweb_test.tests.base_test_case.SetupEnvironment
 
 
-@test(groups=["plugins", 'dvs_vcenter_plugin', 'dvs_vcenter_smoke'])
+@test(groups=['plugins', 'dvs_vcenter_plugin', 'dvs_vcenter_smoke'])
 class TestDVSSmoke(TestBasic):
     """Smoke test suite.
 
     The goal of smoke testing is to ensure that the most critical features
-    of Fuel VMware DVS plugin work  after new build delivery. Smoke tests
+    of Fuel VMware DVS plugin work after new build delivery. Smoke tests
     will be used by QA to accept software builds from Development team.
     """
 
@@ -42,7 +42,7 @@ class TestDVSSmoke(TestBasic):
         """Check that plugin can be installed.
 
         Scenario:
-            1. Upload plugins to the master node
+            1. Upload plugins to the master node.
             2. Install plugin.
             3. Ensure that plugin is installed successfully using cli,
                run command 'fuel plugins'. Check name, version of plugin.
@@ -59,19 +59,17 @@ class TestDVSSmoke(TestBasic):
         cmd = 'fuel plugins list'
 
         output = self.ssh_manager.execute_on_remote(
-            ip=self.ssh_manager.admin_ip,
-            cmd=cmd)['stdout'].pop().split(' ')
+            ip=self.ssh_manager.admin_ip, cmd=cmd
+        )['stdout'].pop().split(' ')
 
         # check name
         assert_true(
             plugin.plugin_name in output,
-            "Plugin '{0}' is not installed.".format(plugin.plugin_name)
-        )
+            "Plugin '{0}' is not installed.".format(plugin.plugin_name))
         # check version
         assert_true(
             plugin.DVS_PLUGIN_VERSION in output,
-            "Plugin '{0}' is not installed.".format(plugin.plugin_name)
-        )
+            "Plugin '{0}' is not installed.".format(plugin.plugin_name))
         self.env.make_snapshot("dvs_install", is_make=True)
 
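The install check above takes the last line of `fuel plugins list` stdout and splits it on single spaces before asserting that the plugin name and version appear. A sketch with an invented table row (the real CLI output layout, plugin name, and version may differ):

```python
# Invented stdout lines; the real 'fuel plugins list' table may differ.
stdout = ['id | name                   | version',
          '1 | fuel-plugin-vmware-dvs | 3.1.1']

# Mirror the test: take the last line and split on single spaces.
output = stdout.pop().split(' ')

# Exact-token membership checks, as in the assert_true calls above.
assert 'fuel-plugin-vmware-dvs' in output
assert '3.1.1' in output
```

Note the check relies on the name and version being space-delimited tokens; a trailing newline or multi-space padding glued to the version string would make the membership test miss it.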
     @test(depends_on=[dvs_install],
@@ -85,7 +83,6 @@ class TestDVSSmoke(TestBasic):
             2. Remove plugin.
             3. Verify that plugin is removed, run command 'fuel plugins'.
 
-
         Duration: 5 min
 
         """
@@ -96,21 +93,17 @@ class TestDVSSmoke(TestBasic):
         cmd = 'fuel plugins --remove {0}=={1}'.format(
             plugin.plugin_name, plugin.DVS_PLUGIN_VERSION)
 
-        self.ssh_manager.execute_on_remote(
-            ip=self.ssh_manager.admin_ip,
-            cmd=cmd,
-            err_msg='Can not remove plugin.'
-        )
+        self.ssh_manager.execute_on_remote(ip=self.ssh_manager.admin_ip,
+                                           cmd=cmd,
+                                           err_msg='Can not remove plugin.')
 
         self.show_step(3)
         output = self.ssh_manager.execute_on_remote(
             ip=self.ssh_manager.admin_ip,
             cmd='fuel plugins list')['stdout'].pop().split(' ')
 
-        assert_true(
-            plugin.plugin_name not in output,
-            "Plugin '{0}' is not removed".format(plugin.plugin_name)
-        )
+        assert_true(plugin.plugin_name not in output,
+                    "Plugin '{0}' is not removed".format(plugin.plugin_name))
 
     @test(depends_on=[dvs_install],
           groups=["dvs_vcenter_smoke"])
@@ -119,7 +112,7 @@ class TestDVSSmoke(TestBasic):
         """Check deployment with VMware DVS plugin and one controller.
 
         Scenario:
-            1. Upload plugins to the master node
+            1. Upload plugins to the master node.
             2. Install plugin.
             3. Create a new environment with following parameters:
                 * Compute: KVM/QEMU with vCenter
@@ -130,7 +123,7 @@ class TestDVSSmoke(TestBasic):
             5. Configure interfaces on nodes.
             6. Configure network settings.
             7. Enable and configure DVS plugin.
-            8  Configure VMware vCenter Settings.
+            8. Configure VMware vCenter Settings.
                Add 1 vSphere clusters and configure Nova Compute instances
                on controllers.
             9. Deploy the cluster.
@@ -139,9 +132,13 @@ class TestDVSSmoke(TestBasic):
         Duration: 1.8 hours
 
         """
+        self.show_step(1)
+        self.show_step(2)
         self.env.revert_snapshot("dvs_install")
 
         # Configure cluster with 2 vcenter clusters
+        self.show_step(3)
+
         cluster_id = self.fuel_web.create_cluster(
             name=self.__class__.__name__,
             mode=DEPLOYMENT_MODE,
@@ -150,22 +147,24 @@ class TestDVSSmoke(TestBasic):
                 "net_segment_type": NEUTRON_SEGMENT_TYPE
             }
         )
-        plugin.enable_plugin(
-            cluster_id, self.fuel_web, multiclusters=False)
+        plugin.enable_plugin(cluster_id, self.fuel_web, multiclusters=False)
 
         # Assign role to node
-        self.fuel_web.update_nodes(
-            cluster_id,
-            {'slave-01': ['controller']}
-        )
+        self.show_step(4)
+        self.fuel_web.update_nodes(cluster_id, {'slave-01': ['controller']})
 
         # Configure VMWare vCenter settings
+        self.show_step(5)
+        self.show_step(6)
+        self.show_step(7)
+        self.show_step(8)
         self.fuel_web.vcenter_configure(cluster_id)
 
+        self.show_step(9)
         self.fuel_web.deploy_cluster_wait(cluster_id)
 
-        self.fuel_web.run_ostf(
-            cluster_id=cluster_id, test_sets=['smoke'])
+        self.show_step(10)
+        self.fuel_web.run_ostf(cluster_id=cluster_id, test_sets=['smoke'])
 
 
 @test(groups=["plugins", 'dvs_vcenter_bvt'])
@@ -179,7 +178,7 @@ class TestDVSBVT(TestBasic):
         """Deploy cluster with DVS plugin and ceph storage.
 
         Scenario:
-            1. Upload plugins to the master node
+            1. Upload plugins to the master node.
             2. Install plugin.
             3. Create a new environment with following parameters:
                 * Compute: KVM/QEMU with vCenter
@@ -209,10 +208,13 @@ class TestDVSBVT(TestBasic):
         """
         self.env.revert_snapshot("ready_with_9_slaves")
 
-        plugin.install_dvs_plugin(
-            self.ssh_manager.admin_ip)
+        self.show_step(1)
+        self.show_step(2)
+        plugin.install_dvs_plugin(self.ssh_manager.admin_ip)
 
         # Configure cluster with 2 vcenter clusters and vcenter glance
+        self.show_step(3)
+
         cluster_id = self.fuel_web.create_cluster(
             name=self.__class__.__name__,
             mode=DEPLOYMENT_MODE,
@@ -227,7 +229,9 @@ class TestDVSBVT(TestBasic):
         )
         plugin.enable_plugin(cluster_id, self.fuel_web)
 
-        # Assign role to node
+        # Assign roles to nodes
+        self.show_step(4)
+
         self.fuel_web.update_nodes(
             cluster_id,
             {'slave-01': ['controller'],
@@ -236,22 +240,26 @@ class TestDVSBVT(TestBasic):
              'slave-04': ['compute', 'ceph-osd'],
              'slave-05': ['compute', 'ceph-osd'],
             'slave-06': ['compute', 'ceph-osd'],
-             'slave-07': ['compute-vmware', 'cinder-vmware']}
-        )
+             'slave-07': ['compute-vmware', 'cinder-vmware']})
 
         # Configure VMWare vCenter settings
+        self.show_step(5)
+        self.show_step(6)
+        self.show_step(7)
+
         target_node_2 = self.fuel_web.get_nailgun_node_by_name('slave-07')
         target_node_2 = target_node_2['hostname']
-        self.fuel_web.vcenter_configure(
-            cluster_id,
-            target_node_2=target_node_2,
-            multiclusters=True
-        )
+        self.fuel_web.vcenter_configure(cluster_id,
+                                        target_node_2=target_node_2,
+                                        multiclusters=True)
 
+        self.show_step(8)
         self.fuel_web.verify_network(cluster_id, timeout=60 * 15)
+
+        self.show_step(9)
         self.fuel_web.deploy_cluster_wait(cluster_id, timeout=3600 * 3)
 
-        self.fuel_web.run_ostf(
-            cluster_id=cluster_id, test_sets=['smoke'])
+        self.show_step(10)
+        self.fuel_web.run_ostf(cluster_id=cluster_id, test_sets=['smoke'])
 
         self.env.make_snapshot("dvs_bvt", is_make=True)

+ 599 - 698  plugin_test/tests/test_plugin_vmware_dvs_system.py
File diff suppressed because it is too large  View File


+ 37 - 34  plugin_test/tests/test_plugin_vmware_dvs_templates.py  View File

@@ -41,7 +41,7 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
         return self.fuel_web.get_nailgun_node_by_name(name_node)['hostname']
 
     def get_network_template(self, template_name):
-        """Get netwok template.
+        """Get network template.
 
         param: template_name: type string, name of file
         """
@@ -61,7 +61,7 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
             1. Upload plugins to the master node.
             2. Install plugin.
             3. Create cluster with vcenter.
-            4. Set CephOSD as backend for Glance and Cinder
+            4. Set CephOSD as backend for Glance and Cinder.
             5. Add nodes with following roles:
                 controller
                 compute-vmware
@@ -70,8 +70,8 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
                 3 ceph-osd
             6. Upload network template.
             7. Check network configuration.
-            8. Deploy the cluster
-            9. Run OSTF
+            8. Deploy the cluster.
+            9. Run OSTF.
 
         Duration 2.5 hours
 
@@ -80,9 +80,12 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
         """
         self.env.revert_snapshot("ready_with_9_slaves")
 
-        plugin.install_dvs_plugin(
-            self.ssh_manager.admin_ip)
+        self.show_step(1)
+        self.show_step(2)
+        plugin.install_dvs_plugin(self.ssh_manager.admin_ip)
 
+        self.show_step(3)
+        self.show_step(4)
         cluster_id = self.fuel_web.create_cluster(
             name=self.__class__.__name__,
             mode=DEPLOYMENT_MODE,
@@ -101,29 +104,26 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
 
         plugin.enable_plugin(cluster_id, self.fuel_web)
 
-        self.fuel_web.update_nodes(
-            cluster_id,
-            {
-                'slave-01': ['controller'],
-                'slave-02': ['compute-vmware'],
-                'slave-03': ['compute-vmware'],
-                'slave-04': ['compute'],
-                'slave-05': ['ceph-osd'],
-                'slave-06': ['ceph-osd'],
-                'slave-07': ['ceph-osd'],
-            },
-            update_interfaces=False
-        )
+        self.show_step(5)
+        self.fuel_web.update_nodes(cluster_id,
+                                   {'slave-01': ['controller'],
+                                    'slave-02': ['compute-vmware'],
+                                    'slave-03': ['compute-vmware'],
+                                    'slave-04': ['compute'],
+                                    'slave-05': ['ceph-osd'],
+                                    'slave-06': ['ceph-osd'],
+                                    'slave-07': ['ceph-osd']},
+                                   update_interfaces=False)
 
         # Configure VMWare vCenter settings
+        self.show_step(6)
+
         target_node_1 = self.node_name('slave-02')
         target_node_2 = self.node_name('slave-03')
-        self.fuel_web.vcenter_configure(
-            cluster_id,
-            target_node_1=target_node_1,
-            target_node_2=target_node_2,
-            multiclusters=True
-        )
+        self.fuel_web.vcenter_configure(cluster_id,
+                                        target_node_1=target_node_1,
+                                        target_node_2=target_node_2,
+                                        multiclusters=True)
 
         network_template = self.get_network_template('default')
         self.fuel_web.client.upload_network_template(
@@ -138,21 +138,24 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
         logger.debug('Networks: {0}'.format(
             self.fuel_web.client.get_network_groups()))
 
+        self.show_step(7)
         self.fuel_web.verify_network(cluster_id)
 
+        self.show_step(8)
         self.fuel_web.deploy_cluster_wait(cluster_id, timeout=180 * 60)
-
         self.fuel_web.verify_network(cluster_id)
 
-        self.check_ipconfig_for_template(cluster_id, network_template,
-                                         networks)
+        self.check_ipconfig_for_template(
+            cluster_id, network_template, networks)
         self.check_services_networks(cluster_id, network_template)
 
-        self.fuel_web.run_ostf(cluster_id=cluster_id,
-                               timeout=3600,
-                               test_sets=['smoke', 'sanity',
-                                          'ha', 'tests_platform'])
-        self.check_ipconfig_for_template(cluster_id, network_template,
-                                         networks)
+        self.show_step(9)
+        self.fuel_web.run_ostf(
+            cluster_id=cluster_id,
+            timeout=3600,
+            test_sets=['smoke', 'sanity', 'ha', 'tests_platform'])
+
+        self.check_ipconfig_for_template(
+            cluster_id, network_template, networks)
 
         self.check_services_networks(cluster_id, network_template)