Controller Node |
Runs the Networking service, Identity, and all of the Compute services that are
required to deploy VMs (for example, nova-api and nova-scheduler). The node must
have at least one network interface, which is connected to the Management
Network. The host name is controlnode, which every other node resolves to the IP
of the controller node. The nova-network service should not be running; it is
replaced by Networking. |
Compute Node |
Runs the Networking L2 agent and the Compute services that run VMs (nova-compute
specifically, and optionally other nova-* services depending on configuration).
The node must have at least two network interfaces. One interface communicates
with the controller node through the management network. The other interface
carries VM traffic on the data network. The VM receives its IP address from the
DHCP agent on this network. |
Network Node |
Runs the Networking L2 agent, DHCP agent, and L3 agent. This node has access to
the external network. The DHCP agent allocates IP addresses to the VMs on the
data network. (Technically, the addresses are allocated by the Networking server
and distributed by the DHCP agent.) The node must have at least two network
interfaces. One interface communicates with the controller node through the
management network. The other interface is used for the external network. GRE
tunnels are set up as data networks. |
Router |
@@ -122,62 +106,49 @@
Controller node
Relevant Compute services are installed, configured, and running.
Glance is installed, configured, and running. In
addition, an image named tty must be present.
Identity is installed, configured, and running. A Networking user named neutron
should be created in the service tenant with the password NEUTRON_PASS.
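If that user does not exist yet, a minimal sketch of creating it with the keystone CLI
follows; it assumes an admin-scoped environment is already loaded and that the service
tenant already exists, so adjust names as needed.
# keystone user-create --name neutron --pass NEUTRON_PASS
# keystone user-role-add --user neutron --tenant service --role admin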
Additional services:
RabbitMQ is running with the default guest user and its password.
MySQL server is running (the user is root and the password is root).
Compute node
Compute is installed and configured.
Install
Controller node: Networking server
Install the Networking server.
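For example, on an Ubuntu or Debian-based controller the installation might look like
the following; the package name is an assumption and depends on your distribution.
# apt-get install neutron-server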
Create the database ovs_neutron.
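A minimal sketch of that step, assuming the MySQL root credentials listed in the
prerequisites:
# mysql -u root -proot -e "CREATE DATABASE ovs_neutron CHARACTER SET utf8;"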
Update the Networking configuration file, /etc/neutron/neutron.conf, with the
plug-in choice and the Identity Service user as necessary:
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
@@ -195,30 +166,24 @@ rabbit_host = controller
notification_driver = neutron.openstack.common.notifier.rabbit_notifier
Update the plug-in configuration file,
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
[database]
connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
Start the Networking server.
The Networking server can be a service of the operating system. The command to
start the service depends on your operating system. The following command runs
the Networking server directly:
# neutron-server --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
--config-file /etc/neutron/neutron.conf
@@ -230,12 +195,9 @@ tunnel_id_ranges = 1:1000
Install Compute services.
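As a sketch, on an Ubuntu or Debian-based controller this might be done as follows;
the exact package set is an assumption and depends on which nova-* services you run here.
# apt-get install nova-api nova-scheduler nova-conductor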
Update the Compute configuration file, /etc/nova/nova.conf. Make sure the
following lines appear at the end of this file:
network_api_class=nova.network.neutronv2.api.API
neutron_admin_username=neutron
@@ -249,165 +211,137 @@ libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
Restart relevant Compute services.
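For example, on a controller that uses System V style init scripts the restart might
look like this; the service names are assumptions and vary by distribution.
# service nova-api restart
# service nova-scheduler restart
# service nova-conductor restart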
Compute and Network node: L2 agent
Install and start Open vSwitch.
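A sketch for an Ubuntu or Debian-based host; the package and service names are
assumptions for other distributions.
# apt-get install openvswitch-switch
# service openvswitch-switch start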
Install the L2 agent (Neutron Open vSwitch agent).
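For example, on Ubuntu or Debian the agent package might be installed as follows
(package name assumed):
# apt-get install neutron-plugin-openvswitch-agent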
Add the integration bridge to Open vSwitch:
# ovs-vsctl add-br br-int
Update the Networking configuration file,
/etc/neutron/neutron.conf:
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
notification_driver = neutron.openstack.common.notifier.rabbit_notifier
Update the plug-in configuration file,
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.
Compute node:
[database]
connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
local_ip = 9.181.89.202
Network node:
[database]
connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
local_ip = 9.181.89.203
Create the integration bridge br-int:
# ovs-vsctl --may-exist add-br br-int
Start the Networking L2 agent.
The Networking Open vSwitch L2 agent can be a service of the operating system.
The command to start the service depends on your operating system. The
following command runs the service directly:
# neutron-openvswitch-agent --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
--config-file /etc/neutron/neutron.conf
Network node: DHCP agent
Install the DHCP agent.
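For example, on Ubuntu or Debian (package name assumed):
# apt-get install neutron-dhcp-agent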
Update the Networking configuration file,
/etc/neutron/neutron.conf:
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
notification_driver = neutron.openstack.common.notifier.rabbit_notifier
allow_overlapping_ips = True
Set allow_overlapping_ips because TenantA and TenantC use overlapping subnets.
Update the DHCP configuration file,
/etc/neutron/dhcp_agent.ini:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
Start the DHCP agent.
The Networking DHCP agent can be a service of the operating system. The command
to start the service depends on your operating system. The following command
runs the service directly:
# neutron-dhcp-agent --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/dhcp_agent.ini
Network node: L3 agent
Install the L3 agent.
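For example, on Ubuntu or Debian (package name assumed):
# apt-get install neutron-l3-agent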
Add the external network bridge:
# ovs-vsctl add-br br-ex
Add the physical interface, for example eth0, that is connected to the outside
network to this bridge:
# ovs-vsctl add-port br-ex eth0
Update the L3 configuration file,
/etc/neutron/l3_agent.ini:
[DEFAULT]
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces=True
Set the use_namespaces option (it is True by default) because TenantA and
TenantC have overlapping subnets, and the routers are hosted on one L3 agent
network node.
Start the L3 agent
The Networking L3 agent can be a service of the operating system. The command to
start the service depends on your operating system. The following command starts
the agent directly:
# neutron-l3-agent --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/l3_agent.ini
@@ -421,9 +355,8 @@ use_namespaces=True
All of the commands below can be executed on the network
node.
Ensure that the following environment variables are set. These are used by the
various clients to access the Identity service.
export OS_USERNAME=admin
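The remaining exports are not shown here; a typical set, with placeholder values that
must be replaced with your own, might look like:
export OS_PASSWORD=ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://localhost:5000/v2.0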
@@ -434,8 +367,7 @@ use_namespaces=True
Get the tenant ID (used as $TENANT_ID later):
# keystone tenant-list
+----------------------------------+---------+---------+
| id | name | enabled |
@@ -503,19 +435,14 @@ use_namespaces=True
+------------------+--------------------------------------------+
provider:network_type local means that Networking does not have to realize this
network through a provider network. router:external true means that an external
network is created, where you can create the floating IP and router gateway
port.
Add an IP on the external network to br-ex.
Because br-ex is the external network
bridge, add an IP 30.0.0.100/24 to br-ex and
ping the floating IP of the VM from our
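A sketch of that check from the network node, using the iproute2 and ping tools and the
addresses from this walkthrough; 30.0.0.2 is the floating IP associated with
TenantA_VM1 later in this section.
# ip addr add 30.0.0.100/24 dev br-ex
# ip link set br-ex up
# ping 30.0.0.2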
@@ -571,8 +498,7 @@ use_namespaces=True
1.
Create a subnet on the network TenantA-Net:
#
neutron --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 subnet-create TenantA-Net 10.0.0.0/24
@@ -637,15 +563,13 @@ neutron --os-tenant-name TenantA --os-username UserA --os-password password \
# neutron --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 router-interface-add \
TenantA-R1 51e2c223-0492-4385-b6e9-83d4e6d10657
Added interface to router TenantA-R1
# neutron --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 \
router-gateway-set TenantA-R1 Ext-Net
Associate a floating IP for TenantA_VM1.
1. Create a floating IP:
# neutron --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 floatingip-create Ext-Net
@@ -673,8 +597,7 @@ neutron --os-tenant-name TenantA --os-username UserA --os-password password \
| 6071d430-c66e-4125-b972-9a937c427520 | | fa:16:3e:a0:73:0d | {"subnet_id": "51e2c223-0492-4385-b6e9-83d4e6d10657", "ip_address": "10.0.0.3"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
3. Associate the floating IP with the VM port:
$ neutron --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 floatingip-associate \
5a1f90ed-aa3c-4df3-82cb-116556e96bf1 6071d430-c66e-4125-b972-9a937c427520
@@ -689,8 +612,7 @@ neutron --os-tenant-name TenantA --os-username UserA --os-password password \
Ping the public network from the server of TenantA.
In my environment, 192.168.1.0/24 is
my public network connected with my
physical router, which also connects
@@ -710,8 +632,7 @@ rtt min/avg/max/mdev = 1.234/1.495/1.745/0.211 ms
Ping the floating IP of TenantA's server:
$ ping 30.0.0.2
PING 30.0.0.2 (30.0.0.2) 56(84) bytes of data.
64 bytes from 30.0.0.2: icmp_req=1 ttl=63 time=45.0 ms
@@ -724,8 +645,7 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
Create other servers for TenantA.
We can create more servers for
TenantA and add floating IPs for
them.
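A sketch of booting a second server, mirroring the earlier TenantA_VM1 command; the
name TenantA_VM2 is only illustrative, and TENANTA_NET_ID stands for the TenantA-Net
network ID from the earlier steps.
# nova --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
  --nic net-id=TENANTA_NET_ID TenantA_VM2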
@@ -741,8 +661,7 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
IPs.
Create networks and subnets for TenantC:
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 net-create TenantC-Net1
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
@@ -798,15 +717,13 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
them to create VMs and router.
Create a server TenantC-VM1 for TenantC on TenantC-Net1.
# nova --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
--nic net-id=91309738-c317-40a3-81bb-bed7a3917a85 TenantC_VM1
Create a server TenantC-VM3 for TenantC on TenantC-Net2.
# nova --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
--nic net-id=5b373ad2-7866-44f4-8087-f87148abd623 TenantC_VM3
@@ -827,13 +744,10 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
will use them later.
Make sure the servers get their IPs.
We can use VNC to log on to the VMs to check whether they get IPs. If not, we
have to make sure that the Networking components are running correctly and that
the GRE tunnels work.
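As a quick sanity check, assuming the standard Networking and Open vSwitch CLIs, verify
that the agents report as alive and that GRE ports exist on the tunnel bridge:
# neutron agent-list
# ovs-vsctl show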
Create and configure a router for
@@ -846,13 +760,17 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 router-interface-add \
TenantC-R1 38f0b2f0-9f98-4bf6-9520-f4abede03300
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
--os-auth-url=http://localhost:5000/v2.0 \
router-gateway-set TenantC-R1 Ext-Net
Checkpoint: ping from within TenantC's servers.
Since we have a router connecting the two subnets, the VMs on these subnets are
able to ping each other. And since we have set the router's gateway interface,
TenantC's servers are able to ping external network IPs, such as 192.168.1.1
and 30.0.0.1.
@@ -862,19 +780,7 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
Associate floating IPs for TenantC's servers.
We can use commands similar to those in TenantA's section to finish this task.
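A sketch of those commands, mirroring the TenantA steps; FLOATINGIP_ID and PORT_ID are
placeholders for the IDs returned by floatingip-create and port-list for the target VM.
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 floatingip-create Ext-Net
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 floatingip-associate \
  FLOATINGIP_ID PORT_ID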
@@ -885,26 +791,20 @@ rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms
Use case: per-tenant routers with private networks
This use case represents a more advanced router scenario in which each tenant
gets at least one router, and potentially has access to the Networking API to
create additional routers. Tenants can create their own networks, potentially
uplinking those networks to a router. This model enables tenant-defined,
multi-tier applications, with each tier being a separate network behind the
router. Because there are multiple routers, tenant subnets can overlap without
conflicting, since access to external networks all happens through SNAT or
floating IPs. Each router uplink and floating IP is allocated from the external
network subnet.
(Figure: ../common/figures/UseCase-MultiRouter.png)
diff --git a/doc/install-guide/section_neutron-provider-router-with-private_networks.xml b/doc/install-guide/section_neutron-provider-router-with-private_networks.xml
index 92ea0bc35c..10be3626de 100644
--- a/doc/install-guide/section_neutron-provider-router-with-private_networks.xml
+++ b/doc/install-guide/section_neutron-provider-router-with-private_networks.xml
@@ -96,7 +96,7 @@
Edit the file /etc/neutron/neutron.conf and modify:
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
auth_strategy = keystone
fake_rabbit = False
rabbit_password = guest
@@ -104,13 +104,12 @@ rabbit_password = guest
Edit the file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini and
modify:
[database]
connection = mysql://neutron:NEUTRON_DBPASS@localhost:3306/neutron
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:100:2999
@@ -166,15 +165,13 @@ rabbit_host = controller
Update the plug-in configuration file,
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller:3306/neutron
[ovs]
tenant_network_type=vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-eth1
@@ -281,14 +278,12 @@ rabbit_host = controller
Update the file
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller:3306/neutron
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-eth1
diff --git a/doc/install-guide/section_neutron-single-flat.xml b/doc/install-guide/section_neutron-single-flat.xml
index 66623c7bcf..fba4ae6469 100644
--- a/doc/install-guide/section_neutron-single-flat.xml
+++ b/doc/install-guide/section_neutron-single-flat.xml
@@ -88,7 +88,7 @@
The demo assumes the following prerequisites:
Controller node
Relevant Compute services are installed, configured,
and running.
@@ -119,13 +119,13 @@
Compute node
Compute is installed and configured.
Install
@@ -162,6 +162,7 @@ core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
notification_driver = neutron.openstack.common.notifier.rabbit_notifier
[keystone_authtoken]
admin_tenant_name=service
admin_user=neutron
@@ -176,7 +177,6 @@ admin_password=NEUTRON_PASS
connection = mysql://root:root@controller:3306/ovs_neutron?charset=utf8
[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth0
@@ -200,12 +200,14 @@ bridge_mappings = physnet1:br-eth0
following line is at the end of the
file:
network_api_class=nova.network.neutronv2.api.API
neutron_admin_username=neutron
neutron_admin_password=NEUTRON_PASS
neutron_admin_auth_url=http://controller:35357/v2.0/
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_url=http://controller:9696/
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
@@ -248,7 +250,6 @@ notification_driver = neutron.openstack.common.notifier.rabbit_notifier
@@ -437,14 +438,14 @@ rtt min/avg/max/mdev = 1.234/1.495/1.745/0.211 ms
outside world. For each subnet on an external network, the
gateway configuration on the physical router must be
manually configured outside of OpenStack.
@@ -453,14 +454,14 @@ rtt min/avg/max/mdev = 1.234/1.495/1.745/0.211 ms
network use case, except that tenants can see multiple
shared networks via the Networking API and can choose
which network (or networks) to plug into.
Use case: mixed flat and private network