The basic requirement we have for running multinode OpenStack tests is that
tempest must be able to ssh to and ping the nested VMs booted by OpenStack,
and these nested VMs need to be able to talk to each other. This is due to
how tempest runs the tests.

We run devstack-gate on multiple public clouds. In order to meet the above
requirement we need some control over l2 and l3 networking in the test
environments, but not all of our clouds provide this control. To work around
this we set up overlay networks across the VMs using software bridges and
tunnels between the hosts. This provides routing for the floating IP network
between tempest and the VMs, and between the VMs themselves.

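Each overlay is conceptually just a software bridge on every host with a
vxlan tunnel to its peers. A minimal sketch of one such link using iproute2
and bridge-utils (the interface names, vxlan id, and addresses here are made
up; devstack-gate's own scripts do the real work):

  # On one test node; the peer runs the same with local/remote swapped.
  sudo ip link add vx_pub type vxlan id 42 \
      local 10.0.0.2 remote 10.0.0.1 dstport 4789
  sudo brctl addbr br_pub          # software bridge for the overlay
  sudo brctl addif br_pub vx_pub   # plug the tunnel into the bridge
  sudo ip link set vx_pub up
  sudo ip link set br_pub up
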
To map this onto a real deployment, the overlay networks would be the
networking provided by your datacenter for OpenStack, and the existing eth0
on each test node would be a management interface or iLO. We just have to
set up our own datacenter networking because we are running in clouds.

Some useful IP ranges:

172.24.4.0/23  This is our "public" IP range. Test nodes get IPs in the
               first half of this subnet.
172.24.5.0/24  This is our floating IP range. VMs get assigned floating
               IPs from this range. The test nodes know how to "route" to
               these VMs due to the interfaces on 172.24.4.0/23 (sketched
               below).

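That "routing" needs no real route entries: the floating range
172.24.5.0/24 is the second half of 172.24.4.0/23, so putting a /23 address
on the overlay bridge makes the whole range, floating IPs included, on-link.
A sketch (bridge name as in the nova network diagrams below):

  # Primary node; subnodes use 172.24.4.2, 172.24.4.3, and so on.
  sudo ip addr add 172.24.4.1/23 dev br_pub
  # The connected route for 172.24.4.0/23 now also covers every
  # floating IP in 172.24.5.0/24.
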
Now for the specifics of each network solution. Nova network and neutron
are different enough that each deserves its own documentation below.

Nova Network
============

 Subnode1                      Primary Node                  Subnode2
+--------------------------+  +--------------------------+  +--------------------------+
|                          |  |                          |  |                          |
|                          |  |                          |  |                          |
|                          |  |                          |  |                          |
|172.24.4.2/23             |  |172.24.4.1/23             |  |172.24.4.3/23             |
|+------+  +--------+      |  |+-------+  +-------+      |  |+-------+  +-------+      |
||br_pub|  | br_flat|      |  ||br_pub |  |br_flat|      |  ||br_pub |  |br_flat|      |
|+--+---+  +---+----+      |  |+---+---+  +---+---+      |  |+---+---+  +---+---+      |
|   |          |           |  |    |          |          |  |    |          |          |
|   |          +---------vxlan-tunnel---------+--------vxlan-tunnel---------+          |
|   |                      |  |    |                     |  |    |                     |
|   +---------vxlan-tunnel---------+--------vxlan-tunnel---------+                     |
|                          |  |                          |  |                          |
+--------------------------+  +--------------------------+  +--------------------------+

In addition to the floating IP connectivity requirement above, nova network
also requires that the private network for the VMs be shared, so that the
nested VMs can reach nova services like DHCP and metadata. Sharing the
private network is not strictly necessary when using nova network multihost,
but we support both modes, and sharing also allows nested VM to nested VM
communication over private IPs.

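For reference, multihost mode was a devstack toggle. A hedged localrc sketch
(variable names from the nova-network era of devstack; details varied by
release):

  # localrc on each node (sketch):
  MULTI_HOST=1                # run nova-network and metadata on every node
  FLAT_INTERFACE=<interface attached to the br_flat overlay>
  PUBLIC_INTERFACE=br_pub     # where floating IPs get configured
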
In this setup we have two soft bridges on each test node, including the
primary node (the controller). The br_flat bridge handles the l2 traffic for
the VMs' private interfaces. The br_pub bridge is where floating IPs are
configured; it allows test node to nested VM communication as well as nested
VM to nested VM communication. We cannot share one l2 bridge for the two
separate l3 networks because nova network uses ebtables to prevent public
IPs from talking to private IPs, and we would lose packets on a shared
bridge as a result.

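Separated out, the bridge setup on a node amounts to two independent l2
overlays, one vxlan id per bridge. A sketch with placeholder endpoint
addresses:

  # Two overlays, one per bridge, each with its own vxlan id.
  sudo brctl addbr br_pub     # carries floating IP (l3) traffic
  sudo brctl addbr br_flat    # carries the VMs' private l2
  sudo ip link add vx_pub  type vxlan id 1 local 10.0.0.2 remote 10.0.0.1
  sudo ip link add vx_flat type vxlan id 2 local 10.0.0.2 remote 10.0.0.1
  sudo brctl addif br_pub vx_pub
  sudo brctl addif br_flat vx_flat

Keeping the vxlan ids distinct keeps the two l2 domains apart, so the
ebtables rules never see public and private traffic mixed on one bridge.
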
This is what it all looks like after you run devstack and boot some VMs.

 Subnode1                      Primary Node                  Subnode2
+--------------------------+  +--------------------------+  +--------------------------+
| +--+      +-----+        |  | +--+       +-----+       |  | +--+       +-----+       |
| |vm|------|br100|        |  | |vm|-------|br100|       |  | |vm|-------|br100|       |
| +--+      +-----+        |  | +--+       +-----+       |  | +--+       +-----+       |
|              |           |  |               |          |  |               |          |
|172.24.5.1/24 |           |  |172.24.5.2/24  |          |  |172.24.5.3/24  |          |
|172.24.4.2/23 |           |  |172.24.4.1/23  |          |  |172.24.4.3/23  |          |
|+------+  +--------+      |  |+-------+  +-------+      |  |+-------+  +-------+      |
||br_pub|  | br_flat|      |  ||br_pub |  |br_flat|      |  ||br_pub |  |br_flat|      |
|+--+---+  +---+----+      |  |+---+---+  +---+---+      |  |+---+---+  +---+---+      |
|   |          |           |  |    |          |          |  |    |          |          |
|   |          +---------vxlan-tunnel---------+--------vxlan-tunnel---------+          |
|   |                      |  |    |                     |  |    |                     |
|   +---------vxlan-tunnel---------+--------vxlan-tunnel---------+                     |
|                          |  |                          |  |                          |
+--------------------------+  +--------------------------+  +--------------------------+

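At this point the original requirement can be checked by hand from any test
node. A sketch with placeholder image, flavor, and address (any floating IP
from 172.24.5.0/24 behaves the same way):

  openstack server create --image cirros --flavor m1.tiny testvm
  openstack floating ip create public
  openstack server add floating ip testvm 172.24.5.17
  ping -c 3 172.24.5.17        # reachable over br_pub from any test node
  ssh cirros@172.24.5.17       # what tempest effectively does
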
Neutron
=======

Neutron is a bit different and comes in two flavors. The base case is
neutron without DVR, where all of the l3 networking runs on the primary
node. The other case is with DVR, where each test node handles l3 for the
nested VMs running on that test node.

For the non-DVR case we don't need to do anything special. Devstack and
neutron set up br-int between the nodes for us, and all public floating IP
traffic is backhauled over br-int to the primary node, where br-ex lives.
br-ex is created on the primary node just as in the single node tests, and
all tempest to floating IP and nested VM to nested VM communication happens
there.

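The br-int backhaul works because devstack points each node's ovs agent at
its peers for tenant tunnels. A rough sketch of the relevant ml2_conf.ini
fragment (example values, not what devstack literally writes):

  [agent]
  tunnel_types = vxlan

  [ovs]
  local_ip = 10.0.0.2     # this node's underlying (eth0) address
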
 Subnode1                      Primary Node                  Subnode2
+--------------------------+  +--------------------------+  +--------------------------+
|                          |  |                          |  |                          |
|                          |  |                          |  |                          |
|                          |  |                          |  |                          |
|172.24.4.2/23             |  |172.24.4.1/23             |  |172.24.4.3/23             |
|+------+                  |  |+-------+                 |  |+-------+                 |
||br-ex |                  |  ||br-ex  |                 |  ||br-ex  |                 |
|+--+---+                  |  |+---+---+                 |  |+---+---+                 |
|   |                      |  |    |                     |  |    |                     |
|   |                      |  |    |                     |  |    |                     |
|   +---------vxlan-tunnel---------+--------vxlan-tunnel---------+                     |
|                          |  |                          |  |                          |
+--------------------------+  +--------------------------+  +--------------------------+

The DVR case is a bit more complicated. Devstack and neutron still configure
br-int for us, so we don't need two overlay networks as with nova network,
but we do need an overlay for the public floating IP network because of our
original requirements: if floating IPs can be configured on arbitrary test
nodes, we need to know how to get packets to them.

Neutron uses br-ex for the floating IP network; unfortunately, devstack and
neutron do not configure br-ex except in the trivial detached-from-everything
case described earlier. This means we have to configure br-ex ourselves, and
the simplest way to do that is to make br-ex the overlay itself. Doing this
allows neutron to work properly, with nested VMs talking to nested VMs, and
it also allows the test nodes to talk to VMs over br-ex.

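Making br-ex the overlay boils down to adding a vxlan port to br-ex for each
peer node, along these lines (a sketch with placeholder peer addresses;
devstack-gate wraps this in its own helper functions):

  # On the primary node, one tunnel port per subnode:
  sudo ovs-vsctl --may-exist add-br br-ex
  sudo ovs-vsctl add-port br-ex vx-sub1 \
      -- set interface vx-sub1 type=vxlan options:remote_ip=10.0.0.2
  sudo ovs-vsctl add-port br-ex vx-sub2 \
      -- set interface vx-sub2 type=vxlan options:remote_ip=10.0.0.3
  sudo ip addr add 172.24.4.1/23 dev br-ex
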
This is what it all looks like after you run devstack and boot some VMs.

 Subnode1                      Primary Node                  Subnode2
+--------------------------+  +--------------------------+  +--------------------------+
| +------+                 |  | +------+                 |  | +------+                 |
| |br-tun|--------tunnel--------|br-tun|--------tunnel--------|br-tun|                 |
| +------+                 |  | +------+                 |  | +------+                 |
| |br-int|                 |  | |br-int|                 |  | |br-int|                 |
| +--+---+                 |  | +--+---+                 |  | +--+---+                 |
|    |                     |  |    |                     |  |    |                     |
|172.24.4.2/23       +--+  |  |172.24.4.1/23       +--+  |  |172.24.4.3/23       +--+  |
|172.24.5.1/24--NAT--|vm|  |  |172.24.5.2/24--NAT--|vm|  |  |172.24.5.3/24--NAT--|vm|  |
|+------+            +--+  |  |+-------+           +--+  |  |+-------+           +--+  |
||br-ex |                  |  ||br-ex  |                 |  ||br-ex  |                 |
|+--+---+                  |  |+---+---+                 |  |+---+---+                 |
|   |                      |  |    |                     |  |    |                     |
|   |                      |  |    |                     |  |    |                     |
|   +---------vxlan-tunnel---------+--------vxlan-tunnel---------+                     |
|                          |  |                          |  |                          |
+--------------------------+  +--------------------------+  +--------------------------+

When DVR is enabled, agent_mode in l3_agent.ini is set to "dvr" on the
primary node and to "dvr_snat" on the remaining subnodes. DVR HA jobs need a
three node setup with this configuration, where "dvr_snat" represents a
network node with centralized SNAT and "dvr" represents a compute node.
There should be at least two "dvr_snat" nodes.
