
Merge "Add system tests for nsx-t plugin from Test Plan"

Jenkins, 2 years ago, parent commit 60023f81d0

doc/test/source/test_suite_scale.rst (+0 -2)

@@ -33,8 +33,6 @@ Steps
33 33
         * Networking: Neutron with NSX-T plugin
34 34
         * Storage: default
35 35
     3. Add nodes with following roles:
36
-        * Controller
37
-        * Controller
38 36
         * Controller
39 37
         * Compute
40 38
     4. Configure interfaces on nodes.

doc/test/source/test_suite_system.rst (+158 -268)

@@ -2,20 +2,20 @@ System
2 2
 ======
3 3
 
4 4
 
5
-Setup for system tests
6
-----------------------
5
+Set up for system tests
6
+-----------------------
7 7
 
8 8
 
9 9
 ID
10 10
 ##
11 11
 
12
-nsxt_ha_mode
12
+nsxt_setup_system
13 13
 
14 14
 
15 15
 Description
16 16
 ###########
17 17
 
18
-Deploy environment with 3 controlers and 1 Compute node. Nova Compute instances are running on controllers nodes. It is a config for all system tests.
18
+Deploy environment with 1 controller, 1 compute-vmware and 2 compute nodes. Nova Compute instances are running on the controller and compute-vmware nodes. It is the common configuration for all system tests.
19 19
 
20 20
 
21 21
 Complexity
@@ -27,21 +27,21 @@ core
27 27
 Steps
28 28
 #####
29 29
 
30
-    1. Log in to the Fuel web UI with preinstalled NSX-T plugin.
31
-    2. Create a new environment with following parameters:
30
+    1. Log in to the Fuel web UI with pre-installed NSX-T plugin.
31
+    2. Create a new environment with the following parameters:
32 32
         * Compute: KVM, QEMU with vCenter
33 33
         * Networking: Neutron with NSX-T plugin
34 34
         * Storage: default
35 35
         * Additional services: default
36 36
     3. Add nodes with following roles:
37 37
         * Controller
38
-        * Controller
39
-        * Controller
38
+        * Compute-vmware
39
+        * Compute
40 40
         * Compute
41 41
     4. Configure interfaces on nodes.
42 42
     5. Configure network settings.
43 43
     6. Enable and configure NSX-T plugin.
44
-    7. Configure VMware vCenter Settings. Add 1 vSphere cluster and configure Nova Compute instance on controllers.
44
+    7. Configure VMware vCenter Settings. Add 2 vSphere clusters and configure Nova Compute instances on the controller and compute-vmware nodes.
45 45
     8. Verify networks.
46 46
     9. Deploy cluster.
47 47
     10. Run OSTF.
@@ -50,23 +50,23 @@ Steps
50 50
 Expected result
51 51
 ###############
52 52
 
53
-Cluster should be deployed and all OSTF test cases should be passed.
53
+Cluster should be deployed and all OSTF test cases should pass.
54 54
 
55 55
 
56
-Check abilities to create and terminate networks on NSX.
57
---------------------------------------------------------
56
+Check connectivity from VMs to public network
57
+---------------------------------------------
58 58
 
59 59
 
60 60
 ID
61 61
 ##
62 62
 
63
-nsxt_create_terminate_networks
63
+nsxt_public_network_availability
64 64
 
65 65
 
66 66
 Description
67 67
 ###########
68 68
 
69
-Verifies that creation of network is translated to vcenter.
69
+Verifies that public network is available.
70 70
 
71 71
 
72 72
 Complexity
@@ -78,55 +78,10 @@ core
78 78
 Steps
79 79
 #####
80 80
 
81
-    1. Setup for system tests.
81
+    1. Set up for system tests.
82 82
     2. Log in to Horizon Dashboard.
83
-    3. Add private networks net_01 and net_02.
84
-    4. Check that networks are present in the vcenter.
85
-    5. Remove private network net_01.
86
-    6. Check that network net_01 has been removed from the vcenter.
87
-    7. Add private network net_01.
88
-
89
-
90
-Expected result
91
-###############
92
-
93
-No errors.
94
-
95
-
96
-Check abilities to bind port on NSX to VM, disable and enable this port.
97
-------------------------------------------------------------------------
98
-
99
-
100
-ID
101
-##
102
-
103
-nsxt_ability_to_bind_port
104
-
105
-
106
-Description
107
-###########
108
-
109
-Verifies that system can not manipulate with port(plugin limitation).
110
-
111
-
112
-Complexity
113
-##########
114
-
115
-core
116
-
117
-
118
-Steps
119
-#####
120
-
121
-    1. Log in to Horizon Dashboard.
122
-    2. Navigate to Project -> Compute -> Instances
123
-    3. Launch instance VM_1 with image TestVM-VMDK and flavor m1.tiny in vcenter az.
124
-    4. Launch instance VM_2 with image TestVM and flavor m1.tiny in nova az.
125
-    5. Verify that VMs should communicate between each other. Send icmp ping from VM_1 to VM_2 and vice versa.
126
-    6. Disable NSX_port of VM_1.
127
-    7. Verify that VMs should communicate between each other. Send icmp ping from VM_2 to VM_1 and vice versa.
128
-    8. Enable NSX_port of VM_1.
129
-    9. Verify that VMs should communicate between each other. Send icmp ping from VM_1 to VM_2 and vice versa.
83
+    3. Launch two instances in default network. Instances should belong to different az (nova and vcenter).
84
+    4. Send ping from each instance to 8.8.8.8.
130 85
 
131 86
 
132 87
 Expected result
@@ -135,20 +90,20 @@ Expected result
135 90
 Pings should get a response.
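The check above reduces to running ping inside each instance and asserting on its exit code. A minimal sketch of how that could be scripted, assuming a hypothetical `run_on_vm` helper that executes a shell command inside an instance via its floating IP and returns the exit code:

    # Sketch: verify outbound connectivity from instances in both az.
    # `run_on_vm` is a hypothetical SSH helper, not part of this suite.
    def check_public_access(run_on_vm, floating_ips, target='8.8.8.8'):
        for fip in floating_ips:
            # Exit code 0 means at least one ICMP reply was received.
            rc = run_on_vm(fip, 'ping -c 3 -W 5 {0}'.format(target))
            assert rc == 0, 'No reply from {0} via {1}'.format(target, fip)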
136 91
 
137 92
 
138
-Check abilities to assign multiple vNIC to a single VM.
93
+Check abilities to create and terminate networks on NSX
139 94
 -------------------------------------------------------
140 95
 
141 96
 
142 97
 ID
143 98
 ##
144 99
 
145
-nsxt_multi_vnic
100
+nsxt_manage_networks
146 101
 
147 102
 
148 103
 Description
149 104
 ###########
150 105
 
151
-Check abilities to assign multiple vNICs to a single VM.
106
+Check ability to create/delete networks and attach/detach them to a router.
152 107
 
153 108
 
154 109
 Complexity
@@ -160,38 +115,39 @@ core
160 115
 Steps
161 116
 #####
162 117
 
163
-    1. Setup for system tests.
118
+    1. Set up for system tests.
164 119
     2. Log in to Horizon Dashboard.
165
-    3. Add two private networks (net01 and net02).
166
-    4. Add one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01, 192.168.101.0/24) to each network.
167
-       NOTE: We have a constraint about network interfaces. One of subnets should have gateway and another should not. So disable gateway on that subnet.
168
-    5. Launch instance VM_1 with image TestVM-VMDK and flavor m1.tiny in vcenter az.
169
-    6. Launch instance VM_2 with image TestVM and flavor m1.tiny in nova az.
170
-    7. Check abilities to assign multiple vNIC net01 and net02 to VM_1.
171
-    8. Check abilities to assign multiple vNIC net01 and net02 to VM_2.
172
-    9. Send icmp ping from VM_1 to VM_2 and vice versa.
120
+    3. Create two private networks net_01 and net_02.
121
+    4. Launch 1 instance in each network. Instances should belong to different az (nova and vcenter).
122
+    5. Check that instances can't communicate with each other.
123
+    6. Attach (add interface) both networks to default router.
124
+    7. Check that instances can communicate with each other via router.
125
+    8. Detach (delete interface) both networks from default router.
126
+    9. Check that instances can't communicate with each other.
127
+    10. Delete created instances.
128
+    11. Delete created networks.
173 129
 
174 130
 
175 131
 Expected result
176 132
 ###############
177 133
 
178
-VM_1 and VM_2 should be attached to multiple vNIC net01 and net02. Pings should get a response.
134
+No errors.
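Steps 6 and 8 map onto two neutron calls. A sketch, assuming `neutron` is an authenticated `neutronclient.v2_0.client.Client` (the same client the new tests reach through `os_conn.neutron`):

    # Attach a network's subnet to a router, then detach it again.
    def attach_detach(neutron, router_id, subnet_id):
        # Step 6: add an interface on the router for the subnet.
        neutron.add_interface_router(router_id, {'subnet_id': subnet_id})
        # ...connectivity check between the instances goes here (step 7)...
        # Step 8: remove the same interface.
        neutron.remove_interface_router(router_id, {'subnet_id': subnet_id})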
179 135
 
180 136
 
181
-Check connectivity between VMs attached to different networks with a router between them.
182
------------------------------------------------------------------------------------------
137
+Check abilities to bind port on NSX to VM, disable and enable this port
138
+-----------------------------------------------------------------------
183 139
 
184 140
 
185 141
 ID
186 142
 ##
187 143
 
188
-nsxt_connectivity_diff_networks
144
+nsxt_manage_ports
189 145
 
190 146
 
191 147
 Description
192 148
 ###########
193 149
 
194
-Test verifies that there is a connection between networks connected through the router.
150
+Verifies that the system cannot manipulate the port (plugin limitation).
195 151
 
196 152
 
197 153
 Complexity
@@ -203,46 +159,41 @@ core
203 159
 Steps
204 160
 #####
205 161
 
206
-    1. Setup for system tests.
162
+    1. Set up for system tests.
207 163
     2. Log in to Horizon Dashboard.
208
-    3. Add two private networks (net01 and net02).
209
-    4. Add one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01, 192.168.101.0/24) to each network. Disable gateway for all subnets.
210
-    5. Navigate to Project -> Compute -> Instances
211
-    6. Launch instances VM_1 and VM_2 in the network 192.168.101.0/24 with image TestVM-VMDK and flavor m1.tiny in vcenter az. Attach default private net as a NIC 1.
212
-    7. Launch instances VM_3 and VM_4 in the network 192.168.101.0/24 with image TestVM and flavor m1.tiny in nova az. Attach default private net as a NIC 1.
213
-    8. Verify that VMs of same networks should communicate
214
-       between each other. Send icmp ping from VM_1 to VM_2, VM_3 to VM_4 and vice versa.
215
-    9. Verify that VMs of different networks should not communicate
216
-       between each other. Send icmp ping from VM_1 to VM_3, VM_4 to VM_2 and vice versa.
217
-    10. Create Router_01, set gateway and add interface to external network.
218
-    11. Enable gateway on subnets. Attach private networks to router.
219
-    12. Verify that VMs of different networks should communicate between each other. Send icmp ping from VM_1 to VM_3, VM_4 to VM_2 and vice versa.
220
-    13. Add new Router_02, set gateway and add interface to external network.
221
-    14. Detach net_02 from Router_01 and attach to Router_02
222
-    15. Assign floating IPs for all created VMs.
223
-    16. Verify that VMs of different networks should communicate between each other by FIPs. Send icmp ping from VM_1 to VM_3, VM_4 to VM_2 and vice versa.
164
+    3. Launch two instances in default network. Instances should belong to different az (nova and vcenter).
165
+    4. Check that instances can communicate with each other.
166
+    5. Disable port attached to instance in nova az.
167
+    6. Check that instances can't communicate with each other.
168
+    7. Enable port attached to instance in nova az.
169
+    8. Check that instances can communicate with each other.
170
+    9. Disable port attached to instance in vcenter az.
171
+    10. Check that instances can't communicate with each other.
172
+    11. Enable port attached to instance in vcenter az.
173
+    12. Check that instances can communicate with each other.
174
+    13. Delete created instances.
224 175
 
225 176
 
226 177
 Expected result
227 178
 ###############
228 179
 
229
-Pings should get a response.
180
+NSX-T plugin should be able to manage admin state of ports.
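Toggling a port in steps 5-11 is a single neutron call; this is the same pattern the new nsxt_manage_ports test below uses:

    # Disable or enable the first port bound to an instance.
    # `neutron` is an authenticated neutronclient, `server_id` a nova VM id.
    def set_port_state(neutron, server_id, up):
        port = neutron.list_ports(device_id=server_id)['ports'][0]['id']
        neutron.update_port(port, {'port': {'admin_state_up': up}})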
230 181
 
231 182
 
232
-Check isolation between VMs in different tenants.
233
--------------------------------------------------
183
+Check abilities to assign multiple vNICs to a single VM
184
+--------------------------------------------------------
234 185
 
235 186
 
236 187
 ID
237 188
 ##
238 189
 
239
-nsxt_different_tenants
190
+nsxt_multiple_vnics
240 191
 
241 192
 
242 193
 Description
243 194
 ###########
244 195
 
245
-Verifies isolation in different tenants.
196
+Check abilities to assign multiple vNICs to a single VM.
246 197
 
247 198
 
248 199
 Complexity
@@ -254,102 +205,88 @@ core
254 205
 Steps
255 206
 #####
256 207
 
257
-    1. Setup for system tests.
208
+    1. Set up for system tests.
258 209
     2. Log in to Horizon Dashboard.
259
-    3. Create non-admin tenant test_tenant.
260
-    4. Navigate to Identity -> Projects.
261
-    5. Click on Create Project.
262
-    6. Type name test_tenant.
263
-    7. On tab Project Members add admin with admin and member.
264
-       Activate test_tenant project by selecting at the top panel.
265
-    8. Navigate to Project -> Network -> Networks
266
-    9. Create network with 2 subnet.
267
-       Create Router, set gateway and add interface.
268
-    10. Navigate to Project -> Compute -> Instances
269
-    11. Launch instance VM_1
270
-    12. Activate default tenant.
271
-    13. Navigate to Project -> Network -> Networks
272
-    14. Create network with subnet.
273
-        Create Router, set gateway and add interface.
274
-    15. Navigate to Project -> Compute -> Instances
275
-    16. Launch instance VM_2.
276
-    17. Verify that VMs on different tenants should not communicate between each other. Send icmp ping from VM_1 of admin tenant to VM_2 of test_tenant and vice versa.
210
+    3. Add two private networks (net01 and net02).
211
+    4. Add one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.101.0/24) to each network.
212
+       NOTE: There is a constraint on network interfaces: one of the subnets should have a gateway and the other should not, so disable the gateway on that subnet.
213
+    5. Launch instance VM_1 with image TestVM-VMDK and flavor m1.tiny in vcenter az.
214
+    6. Launch instance VM_2 with image TestVM and flavor m1.tiny in nova az.
215
+    7. Check the ability to assign multiple vNICs (net01 and net02) to VM_1.
216
+    8. Check the ability to assign multiple vNICs (net01 and net02) to VM_2.
217
+    9. Send icmp ping from VM_1 to VM_2 and vice versa.
277 218
 
278 219
 
279 220
 Expected result
280 221
 ###############
281 222
 
282
-Pings should not get a response.
223
+VM_1 and VM_2 should each be attached to vNICs on net01 and net02. Pings should get a response.
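Steps 7-8 can be driven through nova's interface-attach API. A sketch, assuming `nova` is an authenticated novaclient and both networks already exist:

    # Attach each network to the server as an additional vNIC.
    # interface_attach takes (port_id, net_id, fixed_ip); only net_id is used.
    def add_vnics(nova, server_id, net_ids):
        server = nova.servers.get(server_id)
        for net_id in net_ids:
            server.interface_attach(None, net_id, None)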
283 224
 
284 225
 
285
-Check connectivity between VMs with same ip in different tenants.
286
------------------------------------------------------------------
226
+Check connectivity between VMs attached to different networks with a router between them
227
+----------------------------------------------------------------------------------------
287 228
 
288 229
 
289 230
 ID
290 231
 ##
291 232
 
292
-nsxt_same_ip_different_tenants
233
+nsxt_connectivity_diff_networks
293 234
 
294 235
 
295 236
 Description
296 237
 ###########
297 238
 
298
-Verifies connectivity with same IP in different tenants.
239
+Test verifies that there is a connection between networks connected through the router.
299 240
 
300 241
 
301 242
 Complexity
302 243
 ##########
303 244
 
304
-advanced
245
+core
305 246
 
306 247
 
307 248
 Steps
308 249
 #####
309 250
 
310
-    1. Setup for system tests.
251
+    1. Set up for system tests.
311 252
     2. Log in to Horizon Dashboard.
312
-    3. Create 2 non-admin tenants 'test_1' and 'test_2'.
313
-    4. Navigate to Identity -> Projects.
314
-    5. Click on Create Project.
315
-    6. Type name 'test_1' of tenant.
316
-    7. Click on Create Project.
317
-    8. Type name 'test_2' of tenant.
318
-    9. On tab Project Members add admin with admin and member.
319
-    10. In tenant 'test_1' create net1 and subnet1 with CIDR 10.0.0.0/24
320
-    11. In tenant 'test_1' create security group 'SG_1' and add rule that allows ingress icmp traffic
321
-    12. In tenant 'test_2' create net2 and subnet2 with CIDR 10.0.0.0/24
322
-    13. In tenant 'test_2' create security group 'SG_2'
323
-    14. In tenant 'test_1' add VM_1 of vcenter in net1 with ip 10.0.0.4 and 'SG_1' as security group.
324
-    15. In tenant 'test_1' add VM_2 of nova in net1 with ip 10.0.0.5 and 'SG_1' as security group.
325
-    16. In tenant 'test_2' create net1 and subnet1 with CIDR 10.0.0.0/24
326
-    17. In tenant 'test_2' create security group 'SG_1' and add rule that allows ingress icmp traffic
327
-    18. In tenant 'test_2' add VM_3 of vcenter in net1 with ip 10.0.0.4 and 'SG_1' as security group.
328
-    19. In tenant 'test_2' add VM_4 of nova in net1 with ip 10.0.0.5 and 'SG_1' as security group.
329
-    20. Assign floating IPs for all created VMs.
330
-    21. Verify that VMs with same ip on different tenants should communicate between each other by FIPs. Send icmp ping from VM_1 to VM_3, VM_2 to Vm_4 and vice versa.
253
+    3. Add two private networks (net01 and net02).
254
+    4. Add one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.101.0/24) to each network. Disable gateway for all subnets.
255
+    5. Launch 1 instance in each network. Instances should belong to different az (nova and vcenter).
256
+    6. Create new router (Router_01), set gateway and add interface to external network.
257
+    7. Enable gateway on subnets. Attach private networks to created router.
258
+    8. Verify that VMs in different networks can communicate with each other.
259
+    9. Add one more router (Router_02), set gateway and add interface to external network.
260
+    10. Detach net_02 from Router_01 and attach it to Router_02.
261
+    11. Assign floating IPs for all created VMs.
262
+    12. Check that the default security group allows ICMP traffic.
263
+    13. Verify that VMs in different networks can communicate with each other via FIPs.
264
+    14. Delete instances.
265
+    15. Detach created networks from routers.
266
+    16. Delete created networks.
267
+    17. Delete created routers.
331 268
 
332 269
 
333 270
 Expected result
334 271
 ###############
335 272
 
336
-Pings should get a response.
273
+NSX-T plugin should be able to create/delete routers and assign floating IPs to instances.
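The router juggling in steps 6-10 comes down to a handful of neutron calls. A sketch with placeholder IDs, assuming an authenticated neutronclient and an existing external network:

    # Create two routers with external gateways and move a subnet
    # from Router_01 to Router_02 (steps 6, 9 and 10).
    def rewire(neutron, ext_net_id, subnet_id):
        def make_router(name):
            body = {'router': {
                'name': name,
                'external_gateway_info': {'network_id': ext_net_id}}}
            return neutron.create_router(body)['router']['id']
        r1 = make_router('Router_01')
        r2 = make_router('Router_02')
        neutron.add_interface_router(r1, {'subnet_id': subnet_id})
        # ...connectivity check here, then move the subnet...
        neutron.remove_interface_router(r1, {'subnet_id': subnet_id})
        neutron.add_interface_router(r2, {'subnet_id': subnet_id})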
337 274
 
338 275
 
339
-Check connectivity Vms to public network.
340
------------------------------------------
276
+Check abilities to create and delete security group
277
+---------------------------------------------------
341 278
 
342 279
 
343 280
 ID
344 281
 ##
345 282
 
346
-nsxt_public_network_availability
283
+nsxt_manage_secgroups
347 284
 
348 285
 
349 286
 Description
350 287
 ###########
351 288
 
352
-Verifies that public network is available.
289
+Verifies that creating and removing a security group works correctly.
353 290
 
354 291
 
355 292
 Complexity
@@ -361,34 +298,39 @@ core
361 298
 Steps
362 299
 #####
363 300
 
364
-    1. Setup for system tests.
301
+    1. Set up for system tests.
365 302
     2. Log in to Horizon Dashboard.
366
-    3. Create net01: net01_subnet, 192.168.111.0/24 and attach it to the router04
367
-    4. Launch instance VM_1 of vcenter az with image TestVM-VMDK and flavor m1.tiny in the net_04.
368
-    5. Launch instance VM_1 of nova az with image TestVM and flavor m1.tiny in the net_01.
369
-    6. Send ping from instances VM_1 and VM_2 to 8.8.8.8.
303
+    3. Create new security group with default rules.
304
+    4. Add ingress rule for ICMP protocol.
305
+    5. Launch two instances in default network. Instances should belong to different az (nova and vcenter).
306
+    6. Attach created security group to instances.
307
+    7. Check that instances can ping each other.
308
+    8. Delete ingress rule for ICMP protocol.
309
+    9. Check that instances can't ping each other.
310
+    10. Delete instances.
311
+    11. Delete security group.
370 312
 
371 313
 
372 314
 Expected result
373 315
 ###############
374 316
 
375
-Pings should get a response.
317
+NSX-T plugin should be able to create/delete security groups and add/delete rules.
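Creating the group and its ICMP rule (steps 3-4 and 8) maps onto neutron's security-group API. A sketch, assuming an authenticated neutronclient:

    # Create a security group, add an ingress ICMP rule, delete it later.
    def icmp_rule_cycle(neutron):
        sg = neutron.create_security_group(
            {'security_group': {'name': 'sg_icmp'}})['security_group']
        rule = neutron.create_security_group_rule(
            {'security_group_rule': {'security_group_id': sg['id'],
                                     'direction': 'ingress',
                                     'protocol': 'icmp'}})['security_group_rule']
        # ...ping check between the instances goes here (step 7)...
        neutron.delete_security_group_rule(rule['id'])  # step 8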
376 318
 
377 319
 
378
-Check connectivity VMs to public network with floating ip.
379
-----------------------------------------------------------
320
+Check isolation between VMs in different tenants
321
+------------------------------------------------
380 322
 
381 323
 
382 324
 ID
383 325
 ##
384 326
 
385
-nsxt_floating_ip_to_public
327
+nsxt_different_tenants
386 328
 
387 329
 
388 330
 Description
389 331
 ###########
390 332
 
391
-Verifies that public network is available via floating ip.
333
+Verifies isolation between VMs in different tenants.
392 334
 
393 335
 
394 336
 Complexity
@@ -400,34 +342,39 @@ core
400 342
 Steps
401 343
 #####
402 344
 
403
-    1. Setup for system tests.
404
-    2. Log in to Horizon Dashboard
405
-    3. Create net01: net01_subnet, 192.168.111.0/24 and attach it to the router04
406
-    4. Launch instance VM_1 of vcenter az with image TestVM-VMDK and flavor m1.tiny in the net_04. Associate floating ip.
407
-    5. Launch instance VM_1 of nova az with image TestVM and flavor m1.tiny in the net_01. Associate floating ip.
408
-    6. Send ping from instances VM_1 and VM_2 to 8.8.8.8.
345
+    1. Set up for system tests.
346
+    2. Log in to Horizon Dashboard.
347
+    3. Create new tenant with new user.
348
+    4. Activate new project.
349
+    5. Create network with subnet.
350
+    6. Create router, set gateway and add interface.
351
+    7. Launch instance and associate floating ip with vm.
352
+    8. Activate default tenant.
353
+    9. Launch instance (use the default network) and associate floating ip with vm.
354
+    10. Check that the default security group allows ingress ICMP traffic.
355
+    11. Send icmp ping between instances in different tenants via floating ip.
409 356
 
410 357
 
411 358
 Expected result
412 359
 ###############
413 360
 
414
-Pings should get a response
361
+Instances in different tenants can communicate with each other only via floating IPs.
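Floating IP association in steps 7 and 9 uses the same wrapper the new system tests rely on. A sketch, assuming `os_conn` is a fuelweb_test `os_actions.OpenStackActions` instance:

    # Give each instance a floating IP so the cross-tenant ping in
    # step 11 can target addresses reachable from both tenants.
    def associate_fips(os_conn, instances):
        return [os_conn.assign_floating_ip(vm) for vm in instances]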
415 362
 
416 363
 
417
-Check abilities to create and delete security group.
418
-----------------------------------------------------
364
+Check connectivity between VMs with same ip in different tenants
365
+----------------------------------------------------------------
419 366
 
420 367
 
421 368
 ID
422 369
 ##
423 370
 
424
-nsxt_create_and_delete_secgroups
371
+nsxt_same_ip_different_tenants
425 372
 
426 373
 
427 374
 Description
428 375
 ###########
429 376
 
430
-Verifies that creation and removing security group works fine.
377
+Verifies connectivity with same IP in different tenants.
431 378
 
432 379
 
433 380
 Complexity
@@ -439,44 +386,38 @@ advanced
439 386
 Steps
440 387
 #####
441 388
 
442
-    1. Setup for system tests.
389
+    1. Set up for system tests.
443 390
     2. Log in to Horizon Dashboard.
444
-    3. Launch instance VM_1 in the tenant network net_02 with image TestVM-VMDK and flavor m1.tiny in vcenter az.
445
-    4. Launch instance VM_2 in the tenant network net_02 with image TestVM and flavor m1.tiny in nova az.
446
-    5. Create security groups SG_1 to allow ICMP traffic.
447
-    6. Add Ingress rule for ICMP protocol to SG_1
448
-    7. Attach SG_1 to VMs
449
-    8. Check ping between VM_1 and VM_2 and vice verse
450
-    9. Create security groups SG_2 to allow TCP traffic 22 port.
451
-       Add Ingress rule for TCP protocol to SG_2
452
-    10. Attach SG_2 to VMs.
453
-    11. ssh from VM_1 to VM_2 and vice verse.
454
-    12. Delete custom rules from SG_1 and SG_2.
455
-    13. Check ping and ssh aren't available from VM_1 to VM_2 and vice verse.
456
-    14. Add Ingress rule for ICMP protocol to SG_1.
457
-    15. Add Ingress rule for SSH protocol to SG_2.
458
-    16. Check ping between VM_1 and VM_2 and vice verse.
459
-    17. Check ssh from VM_1 to VM_2 and vice verse.
460
-    18. Attach VMs to default security group.
461
-    19. Delete security groups.
462
-    20. Check ping between VM_1 and VM_2 and vice verse.
463
-    21. Check SSH from VM_1 to VM_2 and vice verse.
391
+    3. Create 2 non-admin tenants 'test_1' and 'test_2' with a common admin user.
392
+    4. Activate project 'test_1'.
393
+    5. Create network 'net1' and subnet 'subnet1' with CIDR 10.0.0.0/24
394
+    6. Create router 'router1' and attach 'net1' to it.
395
+    7. Create security group 'SG_1' and add rule that allows ingress icmp traffic
396
+    8. Launch two instances (VM_1 and VM_2) in created network with created security group. Instances should belong to different az (nova and vcenter).
397
+    9. Assign floating IPs for created VMs.
398
+    10. Activate project 'test_2'.
399
+    11. Create network 'net2' and subnet 'subnet2' with CIDR 10.0.0.0/24
400
+    12. Create router 'router2' and attach 'net2' to it.
401
+    13. Create security group 'SG_2' and add rule that allows ingress icmp traffic
402
+    14. Launch two instances (VM_3 and VM_4) in created network with created security group. Instances should belong to different az (nova and vcenter).
403
+    15. Assign floating IPs for created VMs.
404
+    16. Verify that VMs with the same IP in different tenants communicate with each other via FIPs. Send ICMP ping from VM_1 to VM_3 and from VM_2 to VM_4, and vice versa.
464 405
 
465 406
 
466 407
 Expected result
467 408
 ###############
468 409
 
469
-We should be able to send ICMP and TCP traffic between VMs in different tenants.
410
+Pings should get a response.
470 411
 
471 412
 
472
-Verify that only the associated MAC and IP addresses can communicate on the logical port.
473
------------------------------------------------------------------------------------------
413
+Verify that only the associated MAC and IP addresses can communicate on the logical port
414
+----------------------------------------------------------------------------------------
474 415
 
475 416
 
476 417
 ID
477 418
 ##
478 419
 
479
-nsxt_associated_addresses_communication_on_port
420
+nsxt_bind_mac_ip_on_port
480 421
 
481 422
 
482 423
 Description
@@ -494,9 +435,9 @@ core
494 435
 Steps
495 436
 #####
496 437
 
497
-    1. Setup for system tests.
438
+    1. Set up for system tests.
498 439
     2. Log in to Horizon Dashboard.
499
-    3. Launch 2 instances in each az.
440
+    3. Launch two instances in default network. Instances should belong to different az (nova and vcenter).
500 441
     4. Verify that traffic can be successfully sent from and received on the MAC and IP address associated with the logical port.
501 442
     5. Configure a new IP address from the subnet not like original one on the instance associated with the logical port.
502 443
         * ifconfig eth0 down
@@ -516,14 +457,14 @@ Expected result
516 457
 Instance should not communicate with new ip and mac addresses but it should communicate with old IP.
517 458
 
518 459
 
519
-Check creation instance in the one group simultaneously.
520
---------------------------------------------------------
460
+Check simultaneous instance creation in one group
461
+--------------------------------------------------
521 462
 
522 463
 
523 464
 ID
524 465
 ##
525 466
 
526
-nsxt_create_and_delete_vms
467
+nsxt_batch_instance_creation
527 468
 
528 469
 
529 470
 Description
@@ -541,7 +482,7 @@ core
541 482
 Steps
542 483
 #####
543 484
 
544
-    1. Setup for system tests.
485
+    1. Set up for system tests.
545 486
     2. Navigate to Project -> Compute -> Instances
546 487
     3. Launch 5 instance VM_1 simultaneously with image TestVM-VMDK and flavor m1.tiny in vcenter az in default net_04.
547 488
     4. All instance should be created without any error.
@@ -564,7 +505,7 @@ Verify that instances could be launched on enabled compute host
564 505
 ID
565 506
 ##
566 507
 
567
-nsxt_disable_hosts
508
+nsxt_manage_compute_hosts
568 509
 
569 510
 
570 511
 Description
@@ -582,70 +523,19 @@ core
582 523
 Steps
583 524
 #####
584 525
 
585
-    1. Setup cluster with 3 controllers, 2 Compute nodes and cinder-vmware +
586
-       compute-vmware role.
587
-    2. Assign instances in each az.
588
-    3. Disable one of compute host with vCenter cluster
589
-       (Admin -> Hypervisors).
590
-    4. Create several instances in vcenter az.
591
-    5. Check that instances were created on enabled compute host
592
-       (vcenter cluster).
593
-    6. Disable second compute host with vCenter cluster and enable
594
-       first one.
595
-    7. Create several instances in vcenter az.
596
-    8. Check that instances were created on enabled compute host
597
-       (vcenter cluster).
598
-    9. Create several instances in nova az.
599
-    10. Check that instances were created on enabled compute host
600
-        (nova cluster).
601
-
602
-
603
-Expected result
604
-###############
605
-
606
-All instances work fine.
607
-
608
-
609
-Check that settings about new cluster are placed in neutron config
610
-------------------------------------------------------------------
611
-
612
-
613
-ID
614
-##
615
-
616
-nsxt_smoke_add_compute
617
-
618
-
619
-Description
620
-###########
621
-
622
-Adding compute-vmware role and redeploy cluster with NSX-T plugin has effect in neutron configs.
623
-
624
-
625
-Complexity
626
-##########
627
-
628
-core
629
-
630
-
631
-Steps
632
-#####
633
-
634
-    1. Upload the NSX-T plugin to master node.
635
-    2. Create cluster and configure NSX-T for that cluster.
636
-    3. Provision three controller node.
637
-    4. Deploy cluster.
638
-    5. Get configured clusters morefid(Managed Object Reference) from neutron config.
639
-    6. Add node with compute-vmware role.
640
-    7. Redeploy cluster with new node.
641
-    8. Get new configured clusters morefid from neutron config.
642
-    9. Check new cluster added in neutron config.
526
+    1. Set up for system tests.
527
+    2. Disable one of the compute hosts in each availability zone (vcenter and nova).
528
+    3. Create several instances in both az.
529
+    4. Check that instances were created on enabled compute hosts.
530
+    5. Disable the second compute host and enable the first one in each availability zone (vcenter and nova).
531
+    6. Create several instances in both az.
532
+    7. Check that instances were created on enabled compute hosts.
643 533
 
644 534
 
645 535
 Expected result
646 536
 ###############
647 537
 
648
-Clusters are reconfigured after compute-vmware has been added.
538
+All instances were created on enabled compute hosts.
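Disabling a compute host (steps 2 and 5) can also be done through nova's services API instead of the dashboard. A sketch, assuming `nova` is an authenticated novaclient:

    # Disable or enable the nova-compute service on a host so the
    # scheduler skips or uses it when placing new instances.
    def set_compute_enabled(nova, host, enabled):
        if enabled:
            nova.services.enable(host, 'nova-compute')
        else:
            nova.services.disable(host, 'nova-compute')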
649 539
 
650 540
 
651 541
 Fuel create mirror and update core repos on cluster with NSX-T plugin
@@ -673,7 +563,7 @@ core
673 563
 Steps
674 564
 #####
675 565
 
676
-    1. Setup for system tests
566
+    1. Set up for system tests.
677 567
     2. Log into controller node via Fuel CLI and get PIDs of services which were launched by plugin and store them:
678 568
         `ps ax | grep neutron-server`
679 569
     3. Launch the following command on the Fuel Master node:
@@ -722,7 +612,7 @@ Steps
722 612
     1. Create cluster.
723 613
        Prepare 2 NSX managers.
724 614
     2. Configure plugin.
725
-    3. Set comma separtated list of NSX managers.
615
+    3. Set a comma-separated list of NSX managers.
726 616
        nsx_api_managers = 1.2.3.4,1.2.3.5
727 617
     4. Deploy cluster.
728 618
     5. Run OSTF.
@@ -771,10 +661,10 @@ Steps
771 661
          . ./openrc
772 662
          heat stack-create -f nsxt_stack.yaml teststack
773 663
 
774
-       Wait for status COMPLETE.
775
-    4. Run OSTF.
664
+    4. Wait for the stack creation to complete.
665
+    5. Check that the created instance is operable.
776 666
 
777 667
 
778 668
 Expected result
779 669
 ###############
780
-All OSTF are passed.
670
+All objects related to the stack should be successfully created.

plugin_test/helpers/openstack.py (+7 -6)

@@ -116,7 +116,7 @@ def check_connection_through_host(remote, ip_pair, command='pingv4',
116 116
     :param ip_pair: type list, ips of instances
117 117
     :param remote: access point IP
118 118
     :param command: type string, key 'pingv4', 'pingv6' or 'arping'
119
-    :param  result_of_command: type integer, exit code of command execution
119
+    :param result_of_command: type integer, exit code of command execution
120 120
     :param timeout: wait to get expected result
121 121
     :param interval: interval of executing command
122 122
     """
@@ -293,7 +293,7 @@ def check_service(ip, commands):
293 293
     :param ip: ip address of node
294 294
     :param commands: type list, nova commands to execute on controller,
295 295
                      example of commands:
296
-                     ['nova-manage service list | grep vcenter-vmcluster1'
296
+                     ['nova-manage service list | grep vcenter-vmcluster1']
297 297
     """
298 298
     ssh_manager = SSHManager()
299 299
     ssh_manager.check_call(ip=ip, command='source openrc')
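A call matching the docstring's example, verifying that the vCenter nova-compute services registered; the controller IP and the second cluster name are hypothetical:

    check_service(ip='10.109.0.5', commands=[
        'nova-manage service list | grep vcenter-vmcluster1',
        'nova-manage service list | grep vcenter-vmcluster2'])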
@@ -367,7 +367,7 @@ def verify_instance_state(os_conn, instances=None, expected_state='ACTIVE',
367 367
                              expected_state))
368 368
 
369 369
 
370
-def create_access_point(os_conn, nics, security_groups):
370
+def create_access_point(os_conn, nics, security_groups, host_num=0):
371 371
     """Create access point.
372 372
 
373 373
     Creating instance with floating ip as access point to instances
@@ -375,10 +375,11 @@ def create_access_point(os_conn, nics, security_groups):
375 375
 
376 376
     :param os_conn: type object, openstack
377 377
     :param nics: type dictionary, neutron networks to assign to instance
378
-    :param security_groups: A list of security group names
378
+    :param security_groups: list of security group names
379
+    :param host_num: index of the host
379 380
     """
380
-    # Get any available host
381
-    host = os_conn.nova.services.list(binary='nova-compute')[0]
381
+    # Get the host
382
+    host = os_conn.nova.services.list(binary='nova-compute')[host_num]
382 383
 
383 384
     access_point = create_instances(  # create access point server
384 385
         os_conn=os_conn, nics=nics,

plugin_test/run_tests.py (+1 -0)

@@ -43,6 +43,7 @@ class CloseSSHConnectionsPlugin(Plugin):
43 43
 
44 44
 def import_tests():
45 45
     from tests import test_plugin_nsxt  # noqa
46
+    from tests import test_plugin_system  # noqa
46 47
     from tests import test_plugin_integration  # noqa
47 48
     from tests import test_plugin_scale  # noqa
48 49
     from tests import test_plugin_failover  # noqa

plugin_test/test_templates/nsxt_stack.yaml (+67 -0)

@@ -0,0 +1,67 @@
1
+heat_template_version: 2013-05-23
2
+
3
+description: >
4
+  HOT template to create a new neutron network plus a router to the public
5
+  network, and for deploying servers into the new network.
6
+parameters:
7
+  admin_floating_net:
8
+    type: string
9
+    label: admin_floating_net
10
+    description: ID or name of public network for which floating IP addresses will be allocated
11
+    default: admin_floating_net
12
+  flavor:
13
+    type: string
14
+    label: flavor
15
+    description: Flavor to use for servers
16
+    default: m1.tiny
17
+  image:
18
+    type: string
19
+    label: image
20
+    description: Image to use for servers
21
+    default: TestVM-VMDK
22
+
23
+resources:
24
+  private_net:
25
+    type: OS::Neutron::Net
26
+    properties:
27
+      name: net_1
28
+
29
+  private_subnet:
30
+    type: OS::Neutron::Subnet
31
+    properties:
32
+      network_id: { get_resource: private_net }
33
+      cidr: 10.0.0.0/29
34
+      dns_nameservers: [ 8.8.8.8, 8.8.4.4 ]
35
+
36
+  router:
37
+    type: OS::Neutron::Router
38
+    properties:
39
+      external_gateway_info:
40
+        network: { get_param: admin_floating_net }
41
+
42
+  router_interface:
43
+    type: OS::Neutron::RouterInterface
44
+    properties:
45
+      router_id: { get_resource: router }
46
+      subnet_id: { get_resource: private_subnet }
47
+
48
+  master_image_server_port:
49
+    type: OS::Neutron::Port
50
+    properties:
51
+      network_id: { get_resource: private_net }
52
+      fixed_ips:
53
+        - subnet_id: { get_resource: private_subnet }
54
+
55
+  master_image_server:
56
+    type: OS::Nova::Server
57
+    properties:
58
+      name: instance_1
59
+      image: { get_param: image }
60
+      flavor: { get_param: flavor }
61
+      availability_zone: "vcenter"
62
+      networks:
63
+        - port: { get_resource: master_image_server_port }
64
+
65
+outputs:
66
+  server_info:
67
+    value: { get_attr: [ master_image_server, show ] }

plugin_test/tests/base_plugin_test.py (+12 -0)

@@ -36,6 +36,18 @@ class TestNSXtBase(TestBasic):
36 36
         self.vcenter_az = 'vcenter'
37 37
         self.vmware_image = 'TestVM-VMDK'
38 38
 
39
+    def get_configured_clusters(self, node_ip):
40
+        """Get configured vcenter clusters moref id on controller.
41
+
42
+        :param node_ip: type string, ip of node
43
+        """
44
+
45
+        cmd = r"sed -rn 's/^\s*cluster_moid\s*=\s*([^ ]+)\s*$/\1/p' " \
46
+              "/etc/neutron/plugin.ini"
47
+        clusters_id = self.ssh_manager.check_call(ip=node_ip,
48
+                                                  cmd=cmd).stdout
49
+        return (clusters_id[-1]).rstrip().split(',')
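A hedged sketch of how this helper could be used from a test method, comparing moref ids before and after a compute-vmware node is added (`controller_ip` is hypothetical):

    before = self.get_configured_clusters(controller_ip)
    # ...add a compute-vmware node and redeploy the cluster...
    after = self.get_configured_clusters(controller_ip)
    assert_true(set(after) - set(before),
                'No new vCenter cluster appeared in neutron config')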
50
+
39 51
     def install_nsxt_plugin(self):
40 52
         """Download and install NSX-T plugin on master node.
41 53
 

plugin_test/tests/test_plugin_nsxt.py (+0 -1)

@@ -139,7 +139,6 @@ class TestNSXtSmoke(TestNSXtBase):
139 139
         self.fuel_web.run_ostf(cluster_id=cluster_id,
140 140
                                test_sets=['smoke', 'sanity'])
141 141
 
142
-
143 142
 @test(groups=["nsxt_plugin", "nsxt_bvt_scenarios"])
144 143
 class TestNSXtBVT(TestNSXtBase):
145 144
     """NSX-t BVT scenarios"""

plugin_test/tests/test_plugin_scale.py (+5 -6)

@@ -44,8 +44,6 @@ class TestNSXtScale(TestNSXtBase):
44 44
                 * Networking: Neutron with NSX-T plugin
45 45
                 * Storage: default
46 46
             3. Add nodes with the following roles:
47
-                * Controller
48
-                * Controller
49 47
                 * Controller
50 48
                 * Compute
51 49
             4. Configure interfaces on nodes.
@@ -81,8 +79,6 @@ class TestNSXtScale(TestNSXtBase):
81 79
         self.show_step(3)  # Add nodes
82 80
         self.fuel_web.update_nodes(cluster_id,
83 81
                                    {'slave-01': ['controller'],
84
-                                    'slave-02': ['controller'],
85
-                                    'slave-03': ['controller'],
86 82
                                     'slave-04': ['compute']})
87 83
 
88 84
         self.show_step(4)  # Configure interfaces on nodes
@@ -113,8 +109,9 @@ class TestNSXtScale(TestNSXtBase):
113 109
         os_help.create_instance(os_conn, az='vcenter')
114 110
 
115 111
         self.show_step(10)  # Add 2 controller nodes
116
-        self.fuel_web.update_nodes(cluster_id, {'slave-05': ['controller'],
117
-                                                'slave-06': ['controller']})
112
+        self.fuel_web.update_nodes(cluster_id, {'slave-02': ['controller'],
113
+                                                'slave-03': ['controller']})
114
+        self.reconfigure_cluster_interfaces(cluster_id)
118 115
 
119 116
         self.show_step(11)  # Redeploy cluster
120 117
         self.fuel_web.deploy_cluster_wait(cluster_id)
@@ -216,6 +213,7 @@ class TestNSXtScale(TestNSXtBase):
216 213
 
217 214
         self.show_step(9)  # Add node with compute role
218 215
         self.fuel_web.update_nodes(cluster_id, {'slave-05': ['compute']})
216
+        self.reconfigure_cluster_interfaces(cluster_id)
219 217
 
220 218
         self.show_step(10)  # Redeploy cluster
221 219
         self.fuel_web.deploy_cluster_wait(cluster_id)
@@ -326,6 +324,7 @@ class TestNSXtScale(TestNSXtBase):
326 324
         self.show_step(10)  # Add node with compute-vmware role
327 325
         self.fuel_web.update_nodes(cluster_id,
328 326
                                    {'slave-05': ['compute-vmware']})
327
+        self.reconfigure_cluster_interfaces(cluster_id)
329 328
 
330 329
         self.show_step(11)  # Reconfigure vcenter compute clusters
331 330
         target_node2 = self.fuel_web.get_nailgun_node_by_name('slave-05')

plugin_test/tests/test_plugin_system.py (+258 -0)

@@ -0,0 +1,258 @@
1
+"""Copyright 2016 Mirantis, Inc.
2
+
3
+Licensed under the Apache License, Version 2.0 (the "License"); you may
4
+not use this file except in compliance with the License. You may obtain
5
+a copy of the License at
6
+
7
+http://www.apache.org/licenses/LICENSE-2.0
8
+
9
+Unless required by applicable law or agreed to in writing, software
10
+distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11
+WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12
+License for the specific language governing permissions and limitations
13
+under the License.
14
+"""
15
+
16
+from devops.error import TimeoutError
17
+from devops.helpers.helpers import wait
18
+from proboscis import test
19
+from proboscis.asserts import assert_true
20
+
21
+from fuelweb_test.helpers import os_actions
22
+from fuelweb_test.helpers.decorators import log_snapshot_after_test
23
+from fuelweb_test.settings import DEPLOYMENT_MODE
24
+from fuelweb_test.settings import SERVTEST_PASSWORD
25
+from fuelweb_test.settings import SERVTEST_TENANT
26
+from fuelweb_test.settings import SERVTEST_USERNAME
27
+from fuelweb_test.tests.base_test_case import SetupEnvironment
28
+from tests.base_plugin_test import TestNSXtBase
29
+from helpers import openstack as os_help
30
+
31
+
32
+@test(groups=['nsxt_plugin', 'nsxt_system'])
33
+class TestNSXtSystem(TestNSXtBase):
34
+    """Tests from test plan that have been marked as 'Automated'."""
35
+
36
+    _tenant = None  # default tenant
37
+
38
+    def _create_net(self, os_conn, name):
39
+        """Create network in default tenant."""
40
+        if not self._tenant:
41
+            self._tenant = os_conn.get_tenant(SERVTEST_TENANT)
42
+
43
+        return os_conn.create_network(
44
+            network_name=name, tenant_id=self._tenant.id)['network']
45
+
46
+    @test(depends_on=[SetupEnvironment.prepare_slaves_5],
47
+          groups=['nsxt_setup_system'])
48
+    @log_snapshot_after_test
49
+    def nsxt_setup_system(self):
50
+        """Set up for system tests.
51
+
52
+        Scenario:
53
+            1. Install NSX-T plugin to Fuel Master node with 5 slaves.
54
+            2. Create new environment with the following parameters:
55
+                * Compute: KVM, QEMU with vCenter
56
+                * Networking: Neutron with NSX-T plugin
57
+                * Storage: default
58
+                * Additional services: default
59
+            3. Add nodes with following roles:
60
+                * Controller
61
+                * Compute-vmware
62
+                * Compute
63
+                * Compute
64
+            4. Configure interfaces on nodes.
65
+            5. Enable and configure NSX-T plugin, configure network settings.
66
+            6. Configure VMware vCenter Settings. Add 2 vSphere clusters,
67
+               configure Nova Compute instances on controller and
68
+               compute-vmware.
69
+            7. Verify networks.
70
+            8. Deploy cluster.
71
+            9. Run OSTF.
72
+
73
+        Duration: 120 min
74
+        """
75
+        self.show_step(1)  # Install plugin to Fuel Master node with 5 slaves
76
+        self.env.revert_snapshot('ready_with_5_slaves')
77
+        self.install_nsxt_plugin()
78
+
79
+        self.show_step(2)  # Create new environment with vCenter
80
+        cluster_id = self.fuel_web.create_cluster(
81
+            name=self.__class__.__name__,
82
+            mode=DEPLOYMENT_MODE,
83
+            settings=self.default.cluster_settings,
84
+            configure_ssl=False)
85
+
86
+        self.show_step(3)  # Add nodes
87
+        self.fuel_web.update_nodes(cluster_id,
88
+                                   {'slave-01': ['controller'],
89
+                                    'slave-02': ['compute-vmware'],
90
+                                    'slave-03': ['compute'],
91
+                                    'slave-04': ['compute']})
92
+
93
+        self.show_step(4)  # Configure interfaces on nodes
94
+        self.reconfigure_cluster_interfaces(cluster_id)
95
+
96
+        self.show_step(5)  # Enable and configure plugin, configure networks
97
+        self.enable_plugin(cluster_id)
98
+
99
+        # Configure VMware settings. 2 Cluster, 1 Nova Instance on controllers
100
+        # and 1 Nova Instance on compute-vmware
101
+        self.show_step(6)
102
+        target_node2 = self.fuel_web.get_nailgun_node_by_name('slave-02')
103
+        self.fuel_web.vcenter_configure(cluster_id,
104
+                                        target_node_2=target_node2['hostname'],
105
+                                        multiclusters=True)
106
+
107
+        self.show_step(7)  # Verify networks
108
+        self.fuel_web.verify_network(cluster_id)
109
+
110
+        self.show_step(8)  # Deploy cluster
111
+        self.fuel_web.deploy_cluster_wait(cluster_id)
112
+
113
+        self.show_step(9)  # Run OSTF
114
+        self.fuel_web.run_ostf(cluster_id)
115
+
116
+        self.env.make_snapshot("nsxt_setup_system", is_make=True)
117
+
118
+    @test(depends_on=[nsxt_setup_system],
119
+          groups=['nsxt_manage_ports'])
120
+    @log_snapshot_after_test
121
+    def nsxt_manage_ports(self):
122
+        """Check ability to bind port on NSX to VM, disable and enable it.
123
+
124
+        Scenario:
125
+            1. Set up for system tests.
126
+            2. Get access to OpenStack.
127
+            3. Launch two instances in default network. Instances should belong
128
+               to different az (nova and vcenter).
129
+            4. Check that instances can communicate with each other.
130
+            5. Disable port attached to instance in nova az.
131
+            6. Check that instances can't communicate with each other.
132
+            7. Enable port attached to instance in nova az.
133
+            8. Check that instances can communicate with each other.
134
+            9. Disable port attached to instance in vcenter az.
135
+            10. Check that instances can't communicate with each other.
136
+            11. Enable port attached to instance in vcenter az.
137
+            12. Check that instances can communicate with each other.
138
+            13. Delete created instances.
139
+
140
+        Duration: 30 min
141
+        """
142
+        self.show_step(1)  # Set up for system tests
143
+        self.env.revert_snapshot('nsxt_setup_system')
144
+
145
+        self.show_step(2)  # Get access to OpenStack
146
+        cluster_id = self.fuel_web.get_last_created_cluster()
147
+
148
+        os_conn = os_actions.OpenStackActions(
149
+            self.fuel_web.get_public_vip(cluster_id),
150
+            SERVTEST_USERNAME,
151
+            SERVTEST_PASSWORD,
152
+            SERVTEST_TENANT)
153
+
154
+        # Launch two instances in default network. Instances should belong to
155
+        # different az (nova and vcenter)
156
+        self.show_step(3)
157
+        sg = os_conn.create_sec_group_for_ssh().name
158
+        vm1 = os_help.create_instance(os_conn, sg_names=[sg])
159
+        vm2 = os_help.create_instance(os_conn, az='vcenter', sg_names=[sg])
160
+
161
+        # Check that instances can communicate with each other
162
+        self.show_step(4)
163
+        default_net = os_conn.nova.networks.find(
164
+            label=self.default.PRIVATE_NET)
165
+
166
+        vm1_fip = os_conn.assign_floating_ip(vm1)
167
+        vm2_fip = os_conn.assign_floating_ip(vm2)
168
+
169
+        vm1_ip = os_conn.get_nova_instance_ip(vm1, net_name=default_net)
170
+        vm2_ip = os_conn.get_nova_instance_ip(vm2, net_name=default_net)
171
+
172
+        os_help.check_connection_vms({vm1_fip: [vm2_ip], vm2_fip: [vm1_ip]})
173
+
174
+        self.show_step(5)  # Disable port attached to instance in nova az
175
+        port = os_conn.neutron.list_ports(device_id=vm1.id)['ports'][0]['id']
176
+        os_conn.neutron.update_port(port, {'port': {'admin_state_up': False}})
177
+
178
+        # Check that instances can't communicate with each other
179
+        self.show_step(6)
180
+        os_help.check_connection_vms({vm2_fip: [vm1_ip]}, result_of_command=1)
181
+
182
+        self.show_step(7)  # Enable port attached to instance in nova az
183
+        os_conn.neutron.update_port(port, {'port': {'admin_state_up': True}})
184
+
185
+        # Check that instances can communicate with each other
186
+        self.show_step(8)
187
+        os_help.check_connection_vms({vm1_fip: [vm2_ip], vm2_fip: [vm1_ip]})
188
+
189
+        self.show_step(9)  # Disable port attached to instance in vcenter az
190
+        port = os_conn.neutron.list_ports(device_id=vm2.id)['ports'][0]['id']
191
+        os_conn.neutron.update_port(port, {'port': {'admin_state_up': False}})
192
+
193
+        # Check that instances can't communicate with each other
194
+        self.show_step(10)
195
+        os_help.check_connection_vms({vm1_fip: [vm2_ip]}, result_of_command=1)
196
+
197
+        self.show_step(11)  # Enable port attached to instance in vcenter az
198
+        os_conn.neutron.update_port(port, {'port': {'admin_state_up': True}})
199
+
200
+        # Check that instances can communicate with each other
201
+        self.show_step(12)
202
+        os_help.check_connection_vms({vm1_fip: [vm2_ip], vm2_fip: [vm1_ip]})
203
+
204
+        self.show_step(13)  # Delete created instances
205
+        vm1.delete()
206
+        vm2.delete()
207
+
208
+    @test(depends_on=[nsxt_setup_system],
209
+          groups=['nsxt_hot'])
210
+    @log_snapshot_after_test
211
+    def nsxt_hot(self):
212
+        """Deploy HOT.
213
+
214
+        Scenario:
215
+            1. Deploy cluster with NSX-T.
216
+            2. On controller node create teststack with nsxt_stack.yaml.
217
+            3. Wait for status COMPLETE.
218
+            4. Run OSTF.
219
+
220
+        Duration: 30 min
221
+        """
222
+        template_path = 'plugin_test/test_templates/nsxt_stack.yaml'
223
+
224
+        self.show_step(1)  # Deploy cluster with NSX-t
225
+        self.env.revert_snapshot("nsxt_setup_system")
226
+
227
+        # On controller node create teststack with nsxt_stack.yaml
228
+        self.show_step(2)
229
+        cluster_id = self.fuel_web.get_last_created_cluster()
230
+        os_conn = os_actions.OpenStackActions(
231
+            self.fuel_web.get_public_vip(cluster_id),
232
+            SERVTEST_USERNAME,
233
+            SERVTEST_PASSWORD,
234
+            SERVTEST_TENANT)
235
+
236
+        with open(template_path) as f:
237
+            template = f.read()
238
+
239
+        stack_id = os_conn.heat.stacks.create(
240
+            stack_name='nsxt_stack',
241
+            template=template,
242
+            disable_rollback=True
243
+        )['stack']['id']
244
+
245
+        self.show_step(3)  # Wait for status COMPLETE
246
+        expect_state = 'CREATE_COMPLETE'
247
+        try:
248
+            wait(lambda:
249
+                 os_conn.heat.stacks.get(stack_id).stack_status ==
250
+                 expect_state, timeout=60 * 5)
251
+        except TimeoutError:
252
+            current_state = os_conn.heat.stacks.get(stack_id).stack_status
253
+            assert_true(current_state == expect_state,
254
+                        'Timeout is reached. Current state of stack '
255
+                        'is {}'.format(current_state))
256
+
257
+        self.show_step(4)  # Run OSTF
258
+        self.fuel_web.run_ostf(cluster_id)
