kolla/dev/vagrant/Vagrantfile.custom.example
Andrei-Lucian Șerb 3b12b7b951 Attach external NIC to a NAT-Network if on Wi-Fi
On computers with Wi-Fi adapters, promiscuous mode does not work on
VirtualBox NICs (and possibly on other hypervisors as well). As a
result, the default approach of connecting the Neutron external
interface to a bridged adapter does not allow communication to and from
the Nova VMs over floating IPs with any computer on the external
network (except the host itself) or with the Wi-Fi router. In practice
this means no way to reach the Nova VMs from outside and no Internet
access inside the Nova VMs.

According to VirtualBox documentation (excerpt): "Bridging to a wireless
interface is done differently from bridging to a wired interface,
because most wireless adapters do not support promiscuous mode. All
traffic has to use the MAC address of the host’s wireless adapter, and
therefore VirtualBox needs to replace the source MAC address in the
Ethernet header of an outgoing packet to make sure the reply will be
sent to the host interface. When VirtualBox sees an incoming packet with
a destination IP address that belongs to one of the virtual machine
adapters it replaces the destination MAC address in the Ethernet header
with the VM adapter’s MAC address and passes it on. VirtualBox examines
ARP and DHCP packets in order to learn the IP addresses of virtual
machines."

To fix this issue, a new flag has been introduced: WIFI. When it is
true, the default Vagrant public network is no longer created. Instead,
the 3rd NIC is attached to a NAT-Network named OSNetwork. The
NAT-Network provides a virtual gateway, which is used to communicate
with the external physical Wi-Fi router. Since Vagrant has no
high-level mechanism for attaching an adapter to a NAT-Network, the
code uses the low-level Vagrant construct vm.customize, which makes it
provider-specific.

Promiscuous mode is now activated by default on the 3rd NIC.
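
As a rough sketch only (assuming VirtualBox's "modifyvm" options
--nic3, --nat-network3 and --nicpromisc3; the exact block in the
Vagrantfile may differ), the attachment could look like this:

    Vagrant.configure(2) do |config|
      config.vm.provider :virtualbox do |vb|
        if WIFI
          # Attach the 3rd NIC to the pre-created NAT-Network
          # "OSNetwork" instead of a bridged adapter.
          vb.customize ["modifyvm", :id, "--nic3", "natnetwork"]
          vb.customize ["modifyvm", :id, "--nat-network3", "OSNetwork"]
        end
        # Allow promiscuous mode on the 3rd NIC.
        vb.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
      end
    end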

The WIFI flag is false by default.

This commit only addresses VirtualBox; it is currently unknown whether
other hypervisors exhibit the problem described and fixed here.

DocImpact
Closes-Bug: #1558766
Change-Id: I0b4dbbc562d87191b2179f47b634cdd6f6361a5e
Signed-off-by: Andrei-Lucian Șerb <lucian.serb@icloud.com>
2016-03-21 01:08:45 +02:00

# -*- mode: ruby -*-
# vi: set ft=ruby :
# This file is an example of Vagrant configuration.
# Copy it to Vagrantfile.custom and configure it to your liking to customize
# the Vagrant deployment. The Vagrantfile.custom file is sourced by the
# Vagrantfile, so it has to be valid Ruby code.
# Either libvirt or virtualbox
# PROVIDER = "libvirt"
# Either centos or ubuntu
# DISTRO = "centos"
# The libvirt graphics_ip used for each guest. Only applies if PROVIDER
# is libvirt.
# GRAPHICSIP = "127.0.0.1"
# The bootstrap.sh provision_script requires CentOS 7 or Ubuntu 15.10.
# Provisioning boxes other than the default ones may therefore
# require changes to bootstrap.sh.
# PROVISION_SCRIPT = "bootstrap.sh"
# PROVIDER_DEFAULTS = {
#   libvirt: {
#     centos: {
#       base_image: "centos/7",
#       bridge_interface: "virbr0",
#       vagrant_shared_folder: "/home/vagrant/sync",
#       sync_method: "nfs",
#       kolla_path: "/home/vagrant/kolla"
#     }
#   },
#   virtualbox: {
#     centos: {
#       base_image: "puppetlabs/centos-7.0-64-puppet",
#       bridge_interface: "wlp3s0b1",
#       vagrant_shared_folder: "/home/vagrant/sync",
#       sync_method: "virtualbox",
#       kolla_path: "/home/vagrant/kolla"
#     },
#     ubuntu: {
#       base_image: "ubuntu/wily64",
#       bridge_interface: "wlp3s0b1",
#       vagrant_shared_folder: "/home/vagrant/sync",
#       sync_method: "virtualbox",
#       kolla_path: "/home/vagrant/kolla"
#     }
#   }
# }
# Whether the host network adapter is Wi-Fi.
# On VirtualBox, the user must first manually create a NAT-Network
# named OSNetwork. The default network CIDR must be changed.
# The Neutron external interface will be connected to this network.
# WIFI = false
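# For illustration, assuming VirtualBox's VBoxManage CLI (the CIDR below
# is only a placeholder), the NAT-Network could be created beforehand
# with a command along these lines:
#   VBoxManage natnetwork add --netname OSNetwork \
#     --network "192.168.100.0/24" --enable --dhcp on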
# Whether to do Multi-node or All-in-One deployment
# MULTINODE = false
# The following are only used when deploying multi-node
# NUMBER_OF_CONTROL_NODES = 3
# NUMBER_OF_COMPUTE_NODES = 1
# NUMBER_OF_STORAGE_NODES = 1
# NUMBER_OF_NETWORK_NODES = 1
# NODE_SETTINGS = {
#   aio: {
#     cpus: 4,
#     memory: 4096
#   },
#   operator: {
#     cpus: 1,
#     memory: 1024
#   },
#   control: {
#     cpus: 1,
#     memory: 2048
#   },
#   compute: {
#     cpus: 1,
#     memory: 1024
#   },
#   storage: {
#     cpus: 1,
#     memory: 1024
#   },
#   network: {
#     cpus: 1,
#     memory: 1024
#   }
# }