Provider router with private networks

This section describes how to install the OpenStack Networking service and its components for a single-router use case: a provider router with private networks.

(Figure: provider router with private networks setup)

Because you run the DHCP agent and L3 agent on one node, you must set use_namespaces to True (which is the default) in the configuration files for both agents; the relevant lines are shown after the node table below.

The configuration includes these nodes:
Nodes for use case

Controller
    Runs the Networking service, Identity Service, and all Compute services that are required to deploy a VM. The node must have at least two network interfaces. The first should be connected to the management network, to communicate with the compute and network nodes. The second should be connected to the API/public network.

Compute
    Runs Compute and the Networking L2 agent. This node does not have access to the public network. The node must have a network interface that communicates with the controller node through the management network. The VM receives its IP address from the DHCP agent on this network.

Network
    Runs the Networking L2 agent, the DHCP agent, and the L3 agent. This node has access to the public network. The DHCP agent allocates IP addresses to the VMs on the network. The L3 agent performs NAT and enables the VMs to access the public network. The node must have:
    - A network interface that communicates with the controller node through the management network
    - A network interface on the data network that manages VM traffic
    - A network interface that connects to the external gateway on the network
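Because both the DHCP agent and the L3 agent run on the network node, each agent's configuration file must keep namespaces enabled. The relevant lines, which also appear in the network node steps below, are:

In /etc/neutron/dhcp_agent.ini:

use_namespaces = True

In /etc/neutron/l3_agent.ini:

use_namespaces = True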
Install
Controller

To install and configure the controller node

1. Install the package for your distribution.

On Ubuntu and Debian:

# apt-get install neutron-server

On Red Hat Enterprise Linux, CentOS, and Fedora:

# yum install openstack-neutron

On openSUSE and SLES:

# zypper install openstack-neutron

2. Configure Networking services. Edit the /etc/neutron/neutron.conf file and add these lines:

core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
auth_strategy = keystone
fake_rabbit = False
rabbit_password = RABBIT_PASS

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

3. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file and add these lines:

[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:100:2999

4. Edit the /etc/neutron/api-paste.ini file and add these lines:

admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

5. Start the services:

# service neutron-server restart
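As a quick sanity check (an optional step, not required by the procedure), you can confirm that neutron-server is running by verifying that it listens on its default API port, 9696. The exact command may vary by distribution:

# netstat -lntp | grep 9696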
Network node

To install and configure the network node

1. Install the packages for your distribution.

On Ubuntu and Debian:

# apt-get install neutron-plugin-openvswitch-agent \
  neutron-dhcp-agent neutron-l3-agent

On Red Hat Enterprise Linux, CentOS, and Fedora:

# yum install openstack-neutron-openvswitch \
  openstack-neutron

On openSUSE and SLES:

# zypper install openstack-neutron-openvswitch-agent \
  openstack-neutron openstack-neutron-dhcp-agent openstack-neutron-l3-agent

2. Start Open vSwitch and configure it to start when the system boots. Depending on the distribution, the service is named openvswitch-switch or openvswitch.

On Ubuntu and Debian:

# service openvswitch-switch start

On other distributions, also enable the service at boot with chkconfig:

# service openvswitch start
# chkconfig openvswitch on

or:

# service openvswitch-switch start
# chkconfig openvswitch-switch on

3. Add the integration bridge to Open vSwitch:

# ovs-vsctl add-br br-int

4. Update the OpenStack Networking /etc/neutron/neutron.conf configuration file. Replace the default rabbit_password value (guest) and point the service at the controller:

rabbit_host = controller
rabbit_password = RABBIT_PASS

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller:3306/neutron

On distributions that provide the openstack-config utility, you can set the same options from the command line:

# openstack-config --set /etc/neutron/neutron.conf \
  DEFAULT rabbit_host controller
# openstack-config --set /etc/neutron/neutron.conf \
  DEFAULT rabbit_password RABBIT_PASS
# openstack-config --set /etc/neutron/neutron.conf \
  database connection mysql://neutron:NEUTRON_DBPASS@controller:3306/neutron

If your deployment uses Qpid instead of RabbitMQ as the message broker, set the broker host instead:

# openstack-config --set /etc/neutron/neutron.conf \
  DEFAULT qpid_hostname controller

5. Update the plug-in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini configuration file:

[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-eth1

6. Create the br-eth1 network bridge. All VM communication between the nodes occurs through br-eth1:

# ovs-vsctl add-br br-eth1
# ovs-vsctl add-port br-eth1 eth1

7. Create the external network bridge and add the external interface to it:

# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth2

8. Edit the /etc/neutron/l3_agent.ini file and add these lines:

[DEFAULT]
auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
metadata_ip = controller
use_namespaces = True

9. Edit the /etc/neutron/api-paste.ini file and add these lines:

[DEFAULT]
auth_host = controller
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

10. Edit the /etc/neutron/dhcp_agent.ini file and add this line:

use_namespaces = True

11. Start the networking services.

On Ubuntu and Debian:

# service neutron-plugin-openvswitch-agent start
# service neutron-dhcp-agent restart
# service neutron-l3-agent restart

On distributions that use chkconfig, start the services and enable them to start when the system boots. Depending on the distribution, the services are named either neutron-* or openstack-neutron-*:

# service neutron-openvswitch-agent start
# service neutron-dhcp-agent start
# service neutron-l3-agent start
# chkconfig neutron-openvswitch-agent on
# chkconfig neutron-dhcp-agent on
# chkconfig neutron-l3-agent on

or:

# service openstack-neutron-openvswitch-agent start
# service openstack-neutron-dhcp-agent start
# service openstack-neutron-l3-agent start
# chkconfig openstack-neutron-openvswitch-agent on
# chkconfig openstack-neutron-dhcp-agent on
# chkconfig openstack-neutron-l3-agent on

12. Enable the neutron-ovs-cleanup service. This service starts on boot and ensures that Networking has full control over the creation and management of tap devices. Depending on the distribution, the service is named neutron-ovs-cleanup or openstack-neutron-ovs-cleanup:

# chkconfig neutron-ovs-cleanup on

or:

# chkconfig openstack-neutron-ovs-cleanup on
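At this point you can verify the Open vSwitch layout on the network node. The output should list the br-int, br-eth1, and br-ex bridges, with eth1 attached to br-eth1 and eth2 attached to br-ex:

# ovs-vsctl show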
Compute node

To install and configure the compute node

1. Install the packages for your distribution.

On Ubuntu and Debian:

# apt-get install openvswitch-switch neutron-plugin-openvswitch-agent

On Red Hat Enterprise Linux, CentOS, and Fedora:

# yum install openstack-neutron-openvswitch

On openSUSE and SLES:

# zypper install openstack-neutron-openvswitch-agent

2. Start the Open vSwitch service and configure it to start when the system boots. Depending on the distribution, the service is named openvswitch-switch or openvswitch.

On Ubuntu and Debian:

# service openvswitch-switch start

On other distributions, also enable the service at boot with chkconfig:

# service openvswitch start
# chkconfig openvswitch on

or:

# service openvswitch-switch start
# chkconfig openvswitch-switch on

3. Create the integration bridge:

# ovs-vsctl add-br br-int

4. Create the br-eth1 network bridge. All VM communication between the nodes occurs through br-eth1:

# ovs-vsctl add-br br-eth1
# ovs-vsctl add-port br-eth1 eth1

5. Edit the OpenStack Networking /etc/neutron/neutron.conf configuration file. Replace the default rabbit_password value (guest) and point the service at the controller:

rabbit_host = controller
rabbit_password = RABBIT_PASS

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller:3306/neutron

On distributions that provide the openstack-config utility, you can set the same options from the command line (use qpid_hostname instead of rabbit_host if your deployment uses Qpid as the message broker):

# openstack-config --set /etc/neutron/neutron.conf \
  DEFAULT rabbit_host controller
# openstack-config --set /etc/neutron/neutron.conf \
  DEFAULT rabbit_password RABBIT_PASS
# openstack-config --set /etc/neutron/neutron.conf \
  database connection mysql://neutron:NEUTRON_DBPASS@controller:3306/neutron
# openstack-config --set /etc/neutron/neutron.conf \
  DEFAULT qpid_hostname controller

6. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file and add these lines:

[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-eth1

7. Start the Open vSwitch agent and configure it to start when the system boots. Depending on the distribution, the service is named neutron-plugin-openvswitch-agent, neutron-openvswitch-agent, or openstack-neutron-openvswitch-agent:

# service neutron-plugin-openvswitch-agent restart

or:

# service neutron-openvswitch-agent start
# chkconfig neutron-openvswitch-agent on

or:

# service openstack-neutron-openvswitch-agent start
# chkconfig openstack-neutron-openvswitch-agent on
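Once the agent is running, a quick way to confirm that it has registered with the Networking service is to list the agents from the controller. This assumes admin credentials, which are configured in the next section; the Open vSwitch agents on the compute and network nodes should report as alive:

# neutron agent-list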
Logical network configuration

Run these commands on the network node.

Ensure that the following environment variables are set. Various clients use these variables to access the Identity Service.

Create a novarc file:

export OS_TENANT_NAME=provider_tenant
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL="http://controller:5000/v2.0/"
export OS_SERVICE_ENDPOINT="http://controller:35357/v2.0"
export OS_SERVICE_TOKEN=password

Export the variables:

# source novarc
# echo "source novarc" >> .bashrc

The admin user creates a network and subnet on behalf of tenant_A. A tenant_A user can also complete these steps.

To configure internal networking

1. Get the tenant ID (used as $TENANT_ID later):

# keystone tenant-list
+----------------------------------+--------------------+---------+
|                id                |        name        | enabled |
+----------------------------------+--------------------+---------+
| 48fb81ab2f6b409bafac8961a594980f |  provider_tenant   |   True  |
| cbb574ac1e654a0a992bfc0554237abf |      service       |   True  |
| e371436fe2854ed89cca6c33ae7a83cd | invisible_to_admin |   True  |
| e40fa60181524f9f9ee7aa1038748f08 |      tenant_A      |   True  |
+----------------------------------+--------------------+---------+

2. Create an internal network named net1 for tenant_A ($TENANT_ID is e40fa60181524f9f9ee7aa1038748f08):

# neutron net-create --tenant-id $TENANT_ID net1
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | e99a361c-0af8-4163-9feb-8554d4c37e4f |
| name                      | net1                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 1024                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | e40fa60181524f9f9ee7aa1038748f08     |
+---------------------------+--------------------------------------+

3. Create a subnet on the network net1 (the ID field below is used as $SUBNET_ID later):

# neutron subnet-create --tenant-id $TENANT_ID net1 10.5.5.0/24
+------------------+--------------------------------------------+
| Field            | Value                                      |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "10.5.5.2", "end": "10.5.5.254"} |
| cidr             | 10.5.5.0/24                                |
| dns_nameservers  |                                            |
| enable_dhcp      | True                                       |
| gateway_ip       | 10.5.5.1                                   |
| host_routes      |                                            |
| id               | c395cb5d-ba03-41ee-8a12-7e792d51a167       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | e99a361c-0af8-4163-9feb-8554d4c37e4f       |
| tenant_id        | e40fa60181524f9f9ee7aa1038748f08           |
+------------------+--------------------------------------------+

A user with the admin role must complete these steps. In this procedure, the user is admin from provider_tenant.

To configure the router and external networking

1. Create the router1 router. The ID is used as $ROUTER_ID later:

# neutron router-create router1
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 685f64e7-a020-4fdf-a8ad-e41194ae124b |
| name                  | router1                              |
| status                | ACTIVE                               |
| tenant_id             | 48fb81ab2f6b409bafac8961a594980f     |
+-----------------------+--------------------------------------+

The --tenant-id parameter is not specified, so this router is assigned to the provider_tenant tenant.
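As an optional convenience step, export the subnet and router IDs shown in the output above so that the following commands can reference them as $SUBNET_ID and $ROUTER_ID:

# export SUBNET_ID=c395cb5d-ba03-41ee-8a12-7e792d51a167
# export ROUTER_ID=685f64e7-a020-4fdf-a8ad-e41194ae124b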
2. Add an interface to the router1 router and attach it to the subnet from net1:

# neutron router-interface-add $ROUTER_ID $SUBNET_ID
Added interface to router 685f64e7-a020-4fdf-a8ad-e41194ae124b

You can repeat this step to add more interfaces for other networks that belong to other tenants.

3. Create the ext_net external network:

# neutron net-create ext_net --router:external=True
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 8858732b-0400-41f6-8e5c-25590e67ffeb |
| name                      | ext_net                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 1                                    |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 48fb81ab2f6b409bafac8961a594980f     |
+---------------------------+--------------------------------------+

4. Create the subnet for floating IPs. The DHCP service is disabled for this subnet:

# neutron subnet-create ext_net \
  --allocation-pool start=7.7.7.130,end=7.7.7.150 \
  --gateway 7.7.7.1 7.7.7.0/24 --disable-dhcp
+------------------+--------------------------------------------------+
| Field            | Value                                            |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "7.7.7.130", "end": "7.7.7.150"}       |
| cidr             | 7.7.7.0/24                                       |
| dns_nameservers  |                                                  |
| enable_dhcp      | False                                            |
| gateway_ip       | 7.7.7.1                                          |
| host_routes      |                                                  |
| id               | aef60b55-cbff-405d-a81d-406283ac6cff             |
| ip_version       | 4                                                |
| name             |                                                  |
| network_id       | 8858732b-0400-41f6-8e5c-25590e67ffeb             |
| tenant_id        | 48fb81ab2f6b409bafac8961a594980f                 |
+------------------+--------------------------------------------------+

5. Set the gateway for the router to the external network ($EXTERNAL_NETWORK_ID is the id of ext_net from the output above):

# neutron router-gateway-set $ROUTER_ID $EXTERNAL_NETWORK_ID
Set gateway for router 685f64e7-a020-4fdf-a8ad-e41194ae124b
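To confirm that the gateway was attached, you can inspect the router while still using the admin credentials (a quick optional check):

# neutron router-show $ROUTER_ID

The external_gateway_info field now references the network ID of ext_net.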
A user from tenant_A completes these steps, so the credentials in the environment variables are different than those in the previous procedure.

To allocate floating IP addresses

You can associate a floating IP address with a VM after it starts.

1. Find the ID of the port ($PORT_ID) that was allocated for the VM:

# nova list
+--------------------------------------+--------+--------+---------------+
| ID                                   | Name   | Status | Networks      |
+--------------------------------------+--------+--------+---------------+
| 1cdc671d-a296-4476-9a75-f9ca1d92fd26 | testvm | ACTIVE | net1=10.5.5.3 |
+--------------------------------------+--------+--------+---------------+

# neutron port-list -- --device_id 1cdc671d-a296-4476-9a75-f9ca1d92fd26
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                       |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| 9aa47099-b87b-488c-8c1d-32f993626a30 |      | fa:16:3e:b4:d6:6c | {"subnet_id": "c395cb5d-ba03-41ee-8a12-7e792d51a167", "ip_address": "10.5.5.3"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+

2. Allocate a floating IP (used as $FLOATING_ID):

# neutron floatingip-create ext_net
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 7.7.7.131                            |
| floating_network_id | 8858732b-0400-41f6-8e5c-25590e67ffeb |
| id                  | 40952c83-2541-4d0c-b58e-812c835079a5 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | e40fa60181524f9f9ee7aa1038748f08     |
+---------------------+--------------------------------------+

3. Associate the floating IP with the port for the VM:

# neutron floatingip-associate $FLOATING_ID $PORT_ID
Associated floatingip 40952c83-2541-4d0c-b58e-812c835079a5

4. Show the floating IP:

# neutron floatingip-show $FLOATING_ID
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    | 10.5.5.3                             |
| floating_ip_address | 7.7.7.131                            |
| floating_network_id | 8858732b-0400-41f6-8e5c-25590e67ffeb |
| id                  | 40952c83-2541-4d0c-b58e-812c835079a5 |
| port_id             | 9aa47099-b87b-488c-8c1d-32f993626a30 |
| router_id           | 685f64e7-a020-4fdf-a8ad-e41194ae124b |
| tenant_id           | e40fa60181524f9f9ee7aa1038748f08     |
+---------------------+--------------------------------------+

5. Test the floating IP:

# ping 7.7.7.131
PING 7.7.7.131 (7.7.7.131) 56(84) bytes of data.
64 bytes from 7.7.7.131: icmp_req=2 ttl=64 time=0.152 ms
64 bytes from 7.7.7.131: icmp_req=3 ttl=64 time=0.049 ms
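Because use_namespaces is enabled, the DHCP and router ports live in network namespaces on the network node, which is useful for troubleshooting connectivity. A minimal sketch, with namespace names derived from the network and router IDs used above:

# ip netns
qdhcp-e99a361c-0af8-4163-9feb-8554d4c37e4f
qrouter-685f64e7-a020-4fdf-a8ad-e41194ae124b
# ip netns exec qrouter-685f64e7-a020-4fdf-a8ad-e41194ae124b ping -c 2 10.5.5.3

If the floating IP is no longer needed, it can be released with neutron floatingip-disassociate and neutron floatingip-delete.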
Use case: provider router with private networks

This use case provides each tenant with one or more private networks that connect to the outside world through an OpenStack Networking router. When each tenant gets exactly one network, this architecture maps to the same logical topology as the VlanManager in Compute (although Networking does not require VLANs). Using the Networking API, a tenant sees only the private networks assigned to that tenant. The router object in the API is created and owned by the cloud administrator.

This model supports assigning public addresses to VMs by using floating IPs; the router maps public addresses from the external network to fixed IPs on private networks. Hosts without floating IPs can still create outbound connections to the external network because the provider router performs SNAT to the router's external IP. The IP address of the physical router is used as the gateway_ip of the external network subnet, so the provider has a default router for Internet traffic.

The router provides L3 connectivity among private networks, which means that tenants can reach instances of other tenants unless you use additional filtering, such as security groups. Because there is a single router, tenant networks cannot use overlapping IP ranges. To avoid this issue, the administrator can create the private networks on behalf of the tenants, as in the sketch below.
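A minimal sketch of that approach, assuming two hypothetical tenant IDs ($TENANT_A_ID and $TENANT_B_ID) and illustrative network names; the administrator assigns each tenant a distinct, non-overlapping CIDR:

# neutron net-create --tenant-id $TENANT_A_ID net_a
# neutron subnet-create --tenant-id $TENANT_A_ID net_a 10.5.5.0/24
# neutron net-create --tenant-id $TENANT_B_ID net_b
# neutron subnet-create --tenant-id $TENANT_B_ID net_b 10.5.6.0/24

Because the administrator chooses the CIDRs, the single provider router can attach an interface to each subnet without address conflicts.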