diff --git a/doc/admin-guide-cloud/ch_networking.xml b/doc/admin-guide-cloud/ch_networking.xml index 02052a256f..73b64e2b29 100644 --- a/doc/admin-guide-cloud/ch_networking.xml +++ b/doc/admin-guide-cloud/ch_networking.xml @@ -1438,6 +1438,10 @@ enabled = True + + + +
Use Networking You can start and stop OpenStack Networking services @@ -2139,7 +2143,7 @@ enabled = True
-
+
Authentication and authorization Networking uses the Identity Service as the default authentication service. When the Identity Service is diff --git a/doc/config-reference/networking/section_networking-adv-config.xml b/doc/admin-guide-cloud/section_networking-adv-config.xml similarity index 99% rename from doc/config-reference/networking/section_networking-adv-config.xml rename to doc/admin-guide-cloud/section_networking-adv-config.xml index 878fd3b54a..1d570395d9 100644 --- a/doc/config-reference/networking/section_networking-adv-config.xml +++ b/doc/admin-guide-cloud/section_networking-adv-config.xml @@ -197,7 +197,8 @@ mysql> grant all on <database-name>.* to '<user-name>'@'%'; A driver needs to be configured that matches the plug-in running on the service. The driver is used to create the - routing interface. + routing interface. +
diff --git a/doc/config-reference/networking/section_networking-config-identity.xml b/doc/admin-guide-cloud/section_networking-config-identity.xml similarity index 56% rename from doc/config-reference/networking/section_networking-config-identity.xml rename to doc/admin-guide-cloud/section_networking-config-identity.xml index e3f2595906..9352fb1063 100644 --- a/doc/config-reference/networking/section_networking-config-identity.xml +++ b/doc/admin-guide-cloud/section_networking-config-identity.xml @@ -3,22 +3,21 @@ xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"> - Identity Service + Configure Identity Service for Networking To configure the Identity Service for use with Networking Create the <function>get_id()</function> function - The get_id() function stores the ID - of created objects, and removes error-prone copying and - pasting of object IDs in later steps: + The get_id() function stores the ID of created objects, and removes + the need to copy and paste object IDs in later steps: Add the following function to your .bashrc file: - $ function get_id () { +function get_id () { echo `"$@" | awk '/ id / { print $4 }'` -} +} Source the .bashrc file: @@ -28,35 +27,33 @@ echo `"$@" | awk '/ id / { print $4 }'` Create the Networking service entry - OpenStack Networking must be available in the OpenStack - Compute service catalog. Create the service: + Networking must be available in the Compute service catalog. Create the service: $ NEUTRON_SERVICE_ID=$(get_id keystone service-create --name neutron --type network --description 'OpenStack Networking Service') Create the Networking service endpoint entry - The way that you create an OpenStack Networking endpoint - entry depends on whether you are using the SQL catalog driver - or the template catalog driver: + The way that you create a Networking endpoint entry depends on whether you are using the + SQL or the template catalog driver: - If you use the SQL driver, run - these command with these parameters: specified region - ($REGION), IP address of the OpenStack Networking server - ($IP), and service ID ($NEUTRON_SERVICE_ID, obtained in - the previous step). - $ keystone endpoint-create --region $REGION --service-id $NEUTRON_SERVICE_ID --publicurl 'http://$IP:9696/' --adminurl 'http://$IP:9696/' --internalurl 'http://$IP:9696/' + If you use the SQL driver, run the following command with the + specified region ($REGION), IP address of the Networking server + ($IP), and service ID ($NEUTRON_SERVICE_ID, + obtained in the previous step). + + $ keystone endpoint-create --region $REGION --service-id $NEUTRON_SERVICE_ID \ + --publicurl 'http://$IP:9696/' --adminurl 'http://$IP:9696/' --internalurl 'http://$IP:9696/' For example: $ keystone endpoint-create --region myregion --service-id $NEUTRON_SERVICE_ID \ ---publicurl "http://10.211.55.17:9696/" --adminurl "http://10.211.55.17:9696/" --internalurl "http://10.211.55.17:9696/" + --publicurl "http://10.211.55.17:9696/" --adminurl "http://10.211.55.17:9696/" --internalurl "http://10.211.55.17:9696/" - If you are using the template - driver, add the following content to your - OpenStack Compute catalog template file - (default_catalog.templates), using these parameters: given - region ($REGION) and IP address of the OpenStack - Networking server ($IP). 
+ If you are using the template driver, specify the following + parameters in your Compute catalog template file + (default_catalog.templates), along with the region + ($REGION) and IP address of the Networking server + ($IP). catalog.$REGION.network.publicURL = http://$IP:9696 catalog.$REGION.network.adminURL = http://$IP:9696 catalog.$REGION.network.internalURL = http://$IP:9696 @@ -65,19 +62,16 @@ catalog.$REGION.network.name = Network Service catalog.$Region.network.publicURL = http://10.211.55.17:9696 catalog.$Region.network.adminURL = http://10.211.55.17:9696 catalog.$Region.network.internalURL = http://10.211.55.17:9696 - catalog.$Region.network.name = Network Service +catalog.$Region.network.name = Network Service Create the Networking service user - You must provide admin user credentials that OpenStack - Compute and some internal components of OpenStack Networking - can use to access the OpenStack Networking API. The suggested - approach is to create a special service - tenant, create a neutron user within this - tenant, and to assign this user an admin - role. + You must provide admin user credentials that Compute and some internal Networking + components can use to access the Networking API. Create a special service + tenant and a neutron user within this tenant, and assign an + admin role to this user. Create the admin role: @@ -101,62 +95,47 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696 - For information about how to create service entries and users. - see the OpenStack Installation Guide for - your distribution (For information about how to create service entries and users, see the OpenStack + Installation Guide for your distribution (docs.openstack.org).
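A hedged sketch of the commands behind this step, using the get_id() helper defined earlier and the legacy keystone CLI; the tenant description and the password are placeholders:
$ ADMIN_ROLE=$(get_id keystone role-create --name admin)
$ SERVICE_TENANT=$(get_id keystone tenant-create --name service --description 'Service tenant')
$ NEUTRON_USER=$(get_id keystone user-create --name neutron --pass 'servicepassword' --tenant-id $SERVICE_TENANT)
$ keystone user-role-add --user-id $NEUTRON_USER --role-id $ADMIN_ROLE --tenant-id $SERVICE_TENANT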
Compute - If you use OpenStack Networking, do not run the OpenStack - Compute nova-network - service (like you do in traditional OpenStack Compute - deployments). Instead, OpenStack Compute delegates most - network-related decisions to OpenStack Networking. OpenStack - Compute proxies tenant-facing API calls to manage security - groups and floating IPs to Networking APIs. However, - operator-facing tools such as nova-manage, are not proxied and should not be - used. + If you use Networking, do not run the Compute nova-network service (like you do in traditional Compute deployments). + Instead, Compute delegates most network-related decisions to Networking. Compute proxies + tenant-facing API calls to manage security groups and floating IPs to Networking APIs. + However, operator-facing tools such as nova-manage, + are not proxied and should not be used. - When you configure networking, you must use this guide. Do - not rely on OpenStack Compute networking documentation or past - experience with OpenStack Compute. If a - nova command or configuration option - related to networking is not mentioned in this guide, the - command is probably not supported for use with OpenStack - Networking. In particular, you cannot use CLI tools like - nova-manage and nova - to manage networks or IP addressing, including both fixed and - floating IPs, with OpenStack Networking. + When you configure networking, you must use this guide. Do not rely on Compute + networking documentation or past experience with Compute. If a nova + command or configuration option related to networking is not mentioned in this guide, the + command is probably not supported for use with Networking. In particular, you cannot use CLI + tools like nova-manage and nova to manage networks or + IP addressing, including both fixed and floating IPs, with Networking. - It is strongly recommended that you uninstall nova-network and reboot any - physical nodes that have been running nova-network before using them - to run OpenStack Networking. Inadvertently running the - nova-network - process while using OpenStack Networking can cause problems, - as can stale iptables rules pushed down by previously running - nova-network. - + Uninstall nova-network and reboot any physical + nodes that have been running nova-network before + using them to run Networking. Inadvertently running the nova-network process while using Networking can cause problems, as can stale + iptables rules pushed down by previously running nova-network. + - To ensure that OpenStack Compute works properly with - OpenStack Networking (rather than the legacy nova-network mechanism), you must - adjust settings in the nova.conf - configuration file. + To ensure that Compute works properly with Networking + (rather than the legacy nova-network mechanism), you must + adjust settings in the nova.conf + configuration file.
Networking API and credential configuration - Each time a VM is provisioned or de-provisioned in OpenStack - Compute, nova-* - services communicate with OpenStack Networking using the - standard API. For this to happen, you must configure the - following items in the nova.conf file (used - by each nova-compute - and nova-api - instance). + Each time you provision or de-provision a VM in Compute, nova-* services communicate with Networking using the standard API. For this + to happen, you must configure the following items in the nova.conf file + (used by each nova-compute and nova-api instance).
Basic settings
@@ -170,12 +149,13 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696 - + @@ -191,45 +171,46 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696 - + - + - + - +
nova.conf API and credential settings
network_api_classModify from the default to - nova.network.neutronv2.api.API, to - indicate that OpenStack Networking should be used rather - than the traditional nova-network networking model. - + Modify from the default to + nova.network.neutronv2.api.API, to + indicate that Networking should be used rather than the + traditional nova-network + networking model. +
neutron_url
neutron_admin_tenant_nameUpdate to the name of the service tenant created - in the above section on OpenStack Identity - configuration. + Update to the name of the service tenant created in + the above section on Identity configuration. +
neutron_admin_usernameUpdate to the name of the user created in the - above section on OpenStack Identity configuration. - + Update to the name of the user created in the above + section on Identity configuration. +
neutron_admin_passwordUpdate to the password of the user created in the - above section on OpenStack Identity configuration. - + Update to the password of the user created in the + above section on Identity configuration. +
neutron_admin_auth_urlUpdate to the OpenStack Identity server IP and - port. This is the Identity (keystone) admin API server - IP and port value, and not the Identity service API IP - and port. + Update to the Identity server IP and port. This is + the Identity (keystone) admin API server IP and port + value, and not the Identity service API IP and + port. +
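Taken together, the credential settings in this table correspond to a nova.conf fragment along these lines; the host, tenant, user, and password values are placeholders, and a full example appears at the end of this section:
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://192.168.1.2:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=password
neutron_admin_auth_url=http://192.168.1.2:35357/v2.0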
Configure security groups - The OpenStack Networking Service provides security group - functionality using a mechanism that is more flexible and - powerful than the security group capabilities built into - OpenStack Compute. Therefore, if you use OpenStack Networking, - you should always disable built-in security groups and proxy all - security group calls to the OpenStack Networking API . If you do - not, security policies will conflict by being simultaneously - applied by both services. - To proxy security groups to OpenStack Networking, use the - following configuration values in - nova.conf: + The Networking Service provides security group functionality using a mechanism that is + more flexible and powerful than the security group capabilities built into Compute. Therefore, + if you use Networking, you should always disable built-in security groups and proxy all + security group calls to the Networking API. If you do not, security policies will conflict by + being simultaneously applied by both services. + To proxy security groups to Networking, use the following configuration values in + nova.conf: @@ -251,8 +232,7 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696 - @@ -260,13 +240,10 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
Configure metadata - The OpenStack Compute service allows VMs to query metadata - associated with a VM by making a web request to a special - 169.254.169.254 address. OpenStack Networking supports proxying - those requests to nova-api, even when the requests are made from - isolated networks, or from multiple networks that use - overlapping IP addresses. + The Compute service allows VMs to query metadata associated with a VM by making a web + request to a special 169.254.169.254 address. Networking supports proxying those requests to + nova-api, even when the requests are made from + isolated networks, or from multiple networks that use overlapping IP addresses. To enable proxying the requests, you must update the following fields in nova.conf.
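As a sketch, the metadata-related fields referred to here are typically the two shown below; the shared secret is a placeholder and must match the secret configured for the Networking metadata agent:
service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=foo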
nova.conf security group settings
security_group_apiUpdate to neutron, so that all - security group requests are proxied to the OpenStack + Update to neutron, so that all security group requests are proxied to the Network Service.
@@ -323,10 +300,9 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696 Example nova.conf (for <systemitem class="service" >nova-compute</systemitem> and <systemitem class="service" >nova-api</systemitem>) - Example values for the above settings, assuming a cloud - controller node running OpenStack Compute and OpenStack - Networking with an IP address of 192.168.1.2. - network_api_class=nova.network.neutronv2.api.API + Example values for the above settings, assuming a cloud controller node running Compute + and Networking with an IP address of 192.168.1.2: +network_api_class=nova.network.neutronv2.api.API neutron_url=http://192.168.1.2:9696 neutron_auth_strategy=keystone neutron_admin_tenant_name=service @@ -339,6 +315,6 @@ firewall_driver=nova.virt.firewall.NoopFirewallDriver service_neutron_metadata_proxy=true neutron_metadata_proxy_shared_secret=foo - + diff --git a/doc/config-reference/networking/section_networking-multi-dhcp-agents.xml b/doc/admin-guide-cloud/section_networking-multi-dhcp-agents.xml similarity index 85% rename from doc/config-reference/networking/section_networking-multi-dhcp-agents.xml rename to doc/admin-guide-cloud/section_networking-multi-dhcp-agents.xml index 7368663ad9..944ff7b184 100644 --- a/doc/config-reference/networking/section_networking-multi-dhcp-agents.xml +++ b/doc/admin-guide-cloud/section_networking-multi-dhcp-agents.xml @@ -1,39 +1,4 @@ - - - - - - - -GET'> -PUT'> -POST'> -DELETE'> - - - - - - - - -'> - - - - - - - - -'> -]>
+-----------------+--------------------------+ - - - There will be three hosts in the setup. + There will be three hosts in the setup.
- + @@ -86,13 +49,11 @@ format="PNG" /> The node must have at least one network interface that is connected to the Management Network. - - - Note that nova-network should not be running because it is replaced by Neutron. - + @@ -105,14 +66,12 @@ format="PNG" />
Hosts for DemoHosts for demo
Host
HostA
+
Configuration - - - controlnode - Neutron - Server - - + + controlnode—Neutron Server + Neutron configuration file /etc/neutron/neutron.conf: [DEFAULT] @@ -121,8 +80,8 @@ rabbit_host = controlnode allow_overlapping_ips = True host = controlnode agent_down_time = 5 - - + + Update the plug-in configuration file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini: [vlans] @@ -133,14 +92,11 @@ connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge retry_interval = 2 [linux_bridge] physical_interface_mappings = physnet1:eth0 - - - - - HostA and HostB - L2 - Agent - - + + + + HostA and HostB—L2 Agent + Neutron configuration file /etc/neutron/neutron.conf: [DEFAULT] @@ -148,8 +104,8 @@ rabbit_host = controlnode rabbit_password = openstack # host = HostB on hostb host = HostA - - + + Update the plug-in configuration file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini: [vlans] @@ -160,8 +116,8 @@ connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge retry_interval = 2 [linux_bridge] physical_interface_mappings = physnet1:eth0 - - + + Update the nova configuration file /etc/nova/nova.conf: [DEFAULT] @@ -174,22 +130,17 @@ neutron_auth_strategy=keystone neutron_admin_tenant_name=servicetenant neutron_url=http://100.1.1.10:9696/ firewall_driver=nova.virt.firewall.NoopFirewallDriver - - - - - HostA and HostB - DHCP - Agent - - + + + + HostA and HostB—DHCP Agent + Update the DHCP configuration file /etc/neutron/dhcp_agent.ini: [DEFAULT] interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver - - - - + +
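Once the files above are in place, restart the Networking services and agents so that the new settings take effect. Service names differ between distributions; a sketch that invokes the upstream binaries directly:
# On controlnode:
$ neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
# On HostA and HostB:
$ neutron-linuxbridge-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
$ neutron-dhcp-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini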
Commands in agent management and scheduler @@ -205,10 +156,9 @@ export OS_PASSWORD=adminpassword export OS_TENANT_NAME=admin export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting> </note> - <itemizedlist> - <listitem> - <para><emphasis role="bold">Settings</emphasis></para> - <para>To experiment, you need VMs and a neutron + <procedure> + <title>Settings + To experiment, you need VMs and a neutron network: $ nova list +--------------------------------------+-----------+--------+---------------+ @@ -225,17 +175,16 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/ +--------------------------------------+------+--------------------------------------+ | 89dca1c6-c7d4-4f7a-b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45-d5cf646db9d1 | +--------------------------------------+------+--------------------------------------+ - - - Manage agents in neutron - deployment + + + + Manage agents in neutron deployment Every agent which supports these extensions will register itself with the neutron server when it starts up. - - - List all agents: - $ neutron agent-list + + List all agents: + $ neutron agent-list +--------------------------------------+--------------------+-------+-------+----------------+ | id | agent_type | host | alive | admin_state_up | +--------------------------------------+--------------------+-------+-------+----------------+ @@ -255,8 +204,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/ neutron.conf file. Otherwise the is xxx. - - + + List the DHCP agents that host a specified network In some deployments, one DHCP agent is @@ -275,8 +224,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/ | a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) | +--------------------------------------+-------+----------------+-------+ - - + + List the networks hosted by a given DHCP agent. This command is to show which networks a @@ -288,8 +237,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/ | 89dca1c6-c7d4-4f7a-b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45-d5cf646db9d1 10.0.1.0/24 | +--------------------------------------+------+---------------------------------------------------+ - - + + Show agent details. The agent-list command shows details for a specified @@ -358,20 +307,17 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/ bridge-mapping and the number of virtual network devices on this L2 agent. - - - - - Manage assignment of - networks to DHCP agent + + + + Manage assignment of networks to DHCP agent Now that you have run the net-list-on-dhcp-agent and dhcp-agent-list-hosting-net commands, you can add a network to a DHCP agent and remove one from it. - - - Default scheduling. + + Default scheduling. When you create a network with one port, you can schedule it to an active DHCP agent. If many active DHCP agents are @@ -398,8 +344,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/ dnsmasq service only if there is a DHCP. - - + + Assign a network to a given DHCP agent. To add another DHCP agent to host the @@ -416,8 +362,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/ Both DHCP agents host the net2 network. - - + + Remove a network from a specified DHCP agent. This command is the sibling command for @@ -436,19 +382,16 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/ HostB is hosting the net2 network. - - - - - HA of DHCP - agents + + + + HA of DHCP agents Boot a VM on net2. Let both DHCP agents host net2. Fail the agents in turn to see if the VM can still get the desired IP. - - - Boot a VM on net2. + + Boot a VM on net2. 
$ neutron net-list +--------------------------------------+------+--------------------------------------------------+ | id | name | subnets | @@ -467,8 +410,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/ | c7c0481c-3db8-4d7a-a948-60ce8211d585 | myserver3 | ACTIVE | net1=10.0.1.5 | | f62f4731-5591-46b1-9d74-f0c901de567f | myserver4 | ACTIVE | net2=9.0.1.2 | +--------------------------------------+-----------+--------+---------------+ - - + + Make sure both DHCP agents hosting 'net2'. Use the previous commands to assign the @@ -480,10 +423,10 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/ | a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) | | f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) | +--------------------------------------+-------+----------------+-------+ - - - - To test the HA + + + + Test the HA Log in to the myserver4 VM, @@ -518,11 +461,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/ VM gets the wanted IP again. - - - - - Disable and remove an agent + + Disable and remove an agent An administrator might want to disable an agent if a system hardware or software upgrade is planned. Some agents that support scheduling also @@ -532,7 +472,7 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/ agent. After the agent is disabled, you can safely remove the agent. Remove the resources on the agent before you delete the agent. - To run the following commands, you must stop the + To run the following commands, you must stop the DHCP agent on HostA. $ neutron agent-update --admin-state-up False a0c1c21c-d4f4-4577-9ec7-908f2d48622d $ neutron agent-list @@ -556,7 +496,7 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/ +--------------------------------------+--------------------+-------+-------+----------------+ After deletion, if you restart the DHCP agent, it appears on the agent list again. - - -
+ + +
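For the HA check described above, renewing the lease from inside the guest shows whether a DHCP server still answers. A sketch, assuming the guest image ships a standard DHCP client such as dhclient (udhcpc works similarly on minimal images):
$ sudo dhclient -r eth0
$ sudo dhclient eth0
$ ip addr show eth0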
diff --git a/doc/config-reference/networking/section_networking-scenarios.xml b/doc/admin-guide-cloud/section_networking-scenarios.xml similarity index 76% rename from doc/config-reference/networking/section_networking-scenarios.xml rename to doc/admin-guide-cloud/section_networking-scenarios.xml index 890edd80fb..cf63227325 100644 --- a/doc/config-reference/networking/section_networking-scenarios.xml +++ b/doc/admin-guide-cloud/section_networking-scenarios.xml @@ -9,8 +9,8 @@
Open vSwitch - This section describes how the Open vSwitch plug-in implements the OpenStack - Networking abstractions. + This section describes how the Open vSwitch plug-in implements the Networking + abstractions.
Configuration This example uses VLAN isolation on the switches to isolate tenant networks. This @@ -35,7 +35,7 @@ bridge_mappings = physnet2:br-eth1 - + Under the service tenant, create the shared router, define the @@ -76,7 +76,7 @@ bridge_mappings = physnet2:br-eth1 - + @@ -97,11 +97,10 @@ bridge_mappings = physnet2:br-eth1 is how hypervisors such as KVM and Xen implement a virtual network interface card (typically called a VIF or vNIC). An ethernet frame sent to a TAP device is received by the guest operating system. - A veth pair is a pair of virtual network - interfaces correctly directly together. An ethernet frame sent to one end of a veth - pair is received by the other end of a veth pair. OpenStack networking makes use of - veth pairs as virtual patch cables in order to make connections between virtual - bridges. + A veth pair is a pair of directly connected + virtual network interfaces. An ethernet frame sent to one end of a veth pair + is received by the other end of a veth pair. Networking uses veth pairs as + virtual patch cables to make connections between virtual bridges. A Linux bridge behaves like a hub: you can connect multiple (physical or virtual) network interfaces devices to a Linux bridge. Any ethernet frames that come in from one interface attached to the bridge is @@ -113,10 +112,10 @@ bridge_mappings = physnet2:br-eth1 Integration bridge - The br-int OpenvSwitch bridge is the integration bridge: all of - the guests running on the compute host connect to this bridge. OpenStack Networking - implements isolation across these guests by configuring the - br-int ports. + The br-int OpenvSwitch bridge is the integration bridge: all + guests running on the compute host connect to this bridge. Networking + implements isolation across these guests by configuring the + br-int ports. Physical connectivity bridge @@ -139,19 +138,19 @@ bridge_mappings = physnet2:br-eth1 Security groups: iptables and Linux bridges Ideally, the TAP device vnet0 would be connected directly to - the integration bridge, br-int. Unfortunately, this isn't - possible because of how OpenStack security groups are currently implemented. - OpenStack uses iptables rules on the TAP devices such as vnet0 to - implement security groups, and Open vSwitch is not compatible with iptables rules - that are applied directly on TAP devices that are connected to an Open vSwitch - port. - OpenStack Networking uses an extra Linux bridge and a veth pair as a workaround for - this issue. Instead of connecting vnet0 to an Open vSwitch - bridge, it is connected to a Linux bridge, - qbrXXX. This bridge is - connected to the integration bridge, br-int, through the - (qvbXXX, - qvoXXX) veth pair. + the integration bridge, br-int. Unfortunately, this isn't + possible because of how OpenStack security groups are currently implemented. + OpenStack uses iptables rules on the TAP devices such as + vnet0 to implement security groups, and Open vSwitch + is not compatible with iptables rules that are applied directly on TAP + devices that are connected to an Open vSwitch port. + Networking uses an extra Linux bridge and a veth pair as a workaround for this + issue. Instead of connecting vnet0 to an Open vSwitch + bridge, it is connected to a Linux bridge, + qbrXXX. This bridge is + connected to the integration bridge, br-int, through the + (qvbXXX, + qvoXXX) veth pair.
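To inspect these devices on a compute host, the standard Open vSwitch and Linux bridge utilities can be used; a sketch, with bridge and port names that will differ per deployment:
$ ovs-vsctl show
$ ovs-vsctl list-ports br-int
$ brctl show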
@@ -170,7 +169,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1 The following figure shows the network devices on the network host: - + As on the compute host, there is an Open vSwitch integration bridge @@ -187,99 +186,103 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1 packets traverse that veth pair in this example. Open vSwitch internal ports The network host uses Open vSwitch internal - ports. Internal ports enable you to assign one - or more IP addresses to an Open vSwitch bridge. In previous example, the - br-int bridge has four internal - ports: tapXXX, - qr-YYY, - qr-ZZZ, - tapWWW. Each internal port has - a separate IP address associated with it. An internal port, - qg-VVV, is on the br-ex bridge. + ports. Internal ports enable you to assign one or more IP + addresses to an Open vSwitch bridge. In previous example, the + br-int bridge has four internal ports: + tapXXX, + qr-YYY, + qr-ZZZ, and + tapWWW. Each internal + port has a separate IP address associated with it. An internal port, + qg-VVV, is on the br-ex + bridge. DHCP agent - By default, The OpenStack Networking DHCP agent uses a program called dnsmasq - to provide DHCP services to guests. OpenStack Networking must create an internal - port for each network that requires DHCP services and attach a dnsmasq process to - that port. In the previous example, the interface - tapXXX is on subnet - net01_subnet01, and the interface - tapWWW is on - net02_subnet01. + By default, The Networking DHCP agent uses a process called dnsmasq to provide + DHCP services to guests. Networking must create an internal port for each + network that requires DHCP services and attach a dnsmasq process to that + port. In the previous example, the + tapXXX interface is on + net01_subnet01, and the + tapWWW interface is on + net02_subnet01. L3 agent (routing) - The OpenStack Networking L3 agent implements routing through the use of Open - vSwitch internal ports and relies on the network host to route the packets across - the interfaces. In this example: interfaceqr-YYY, which is on - subnet net01_subnet01, has an IP address of 192.168.101.1/24, - interface qr-ZZZ, which is on subnet - net02_subnet01, has an IP address of - 192.168.102.1/24, and interface - qg-VVV, which has an IP - address of 10.64.201.254/24. Because of each of these interfaces - is visible to the network host operating system, it will route the packets - appropriately across the interfaces, as long as an administrator has enabled IP - forwarding. + The Networking L3 agent uses Open vSwitch internal ports to implement routing and + relies on the network host to route the packets across the interfaces. In + this example, the qr-YYY interface is on + net01_subnet01 and has the IP address + 192.168.101.1/24. The qr-ZZZ, + interface is on net02_subnet01 and has the IP address + 192.168.102.1/24. The + qg-VVV interface has + the IP address 10.64.201.254/24. Because each of these + interfaces is visible to the network host operating system, the network host + routes the packets across the interfaces, as long as an administrator has + enabled IP forwarding. The L3 agent uses iptables to implement floating IPs to do the network address translation (NAT). Overlapping subnets and network namespaces - One problem with using the host to implement routing is that there is a chance - that one of the OpenStack Networking subnets might overlap with one of the physical - networks that the host uses. 
For example, if the management network is implemented - on eth2 (not shown in the previous example), by coincidence happens - to also be on the 192.168.101.0/24 subnet, then this will cause - routing problems because it is impossible ot determine whether a packet on this - subnet should be sent to qr-YYY or eth2. In - general, if end-users are permitted to create their own logical networks and - subnets, then the system must be designed to avoid the possibility of such - collisions. - OpenStack Networking uses Linux network namespaces - to prevent collisions between the physical networks on the network host, - and the logical networks used by the virtual machines. It also prevents collisions - across different logical networks that are not routed to each other, as you will see - in the next scenario. - A network namespace can be thought of as an isolated environment that has its own - networking stack. A network namespace has its own network interfaces, routes, and - iptables rules. You can think of like a chroot jail, except for networking instead - of a file system. As an aside, LXC (Linux containers) use network namespaces to - implement networking virtualization. - OpenStack Networking creates network namespaces on the network host in order - to avoid subnet collisions. - Tn this example, there are three network namespaces, as depicted in the following figure. - - qdhcp-aaa: contains the - tapXXX interface - and the dnsmasq process that listens on that interface, to provide DHCP - services for net01_subnet01. This allows overlapping - IPs between net01_subnet01 and any other subnets on - the network host. - - - qrouter-bbbb: contains - the qr-YYY, - qr-ZZZ, and - qg-VVV interfaces, - and the corresponding routes. This namespace implements - router01 in our example. - - - qdhcp-ccc: contains the - tapWWW interface - and the dnsmasq process that listens on that interface, to provide DHCP - services for net02_subnet01. This allows overlapping - IPs between net02_subnet01 and any other subnets on - the network host. - - - - - - - - + One problem with using the host to implement routing is that one of the + Networking subnets might overlap with one of the physical networks that the + host uses. For example, if the management network is implemented on + eth2 and also happens to be on the + 192.168.101.0/24 subnet, routing problems will occur + because the host can't determine whether to send a packet on this subnet to + qr-YYY or eth2. If end users are + permitted to create their own logical networks and subnets, you must design + the system so that such collisions do not occur. + Networking uses Linux network namespaces to + prevent collisions between the physical networks on the network host, and + the logical networks used by the virtual machines. It also prevents + collisions across different logical networks that are not routed to each + other, as the following scenario shows. + A network namespace is an isolated environment with its own networking stack. A + network namespace has its own network interfaces, routes, and iptables + rules. Consider it a chroot jail, except for networking instead of for a + file system. LXC (Linux containers) use network namespaces to implement + networking virtualization. + Networking creates network namespaces on the network host to avoid subnet + collisions. 
+ + + + + + In this example, there are three network namespaces, as shown in the figure above: + + qdhcp-aaa: + contains the + tapXXX + interface and the dnsmasq process that listens on that interface + to provide DHCP services for net01_subnet01. + This allows overlapping IPs between + net01_subnet01 and any other subnets on + the network host. + + + qrouter-bbbb: + contains the + qr-YYY, + qr-ZZZ, + and qg-VVV + interfaces, and the corresponding routes. This namespace + implements router01 in our example. + + + qdhcp-ccc: + contains the + tapWWW + interface and the dnsmasq process that listens on that + interface, to provide DHCP services for + net02_subnet01. This allows overlapping + IPs between net02_subnet01 and any other + subnets on the network host. + +
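A quick way to confirm this layout on the network host is to list the namespaces and look inside them; a sketch, where qdhcp-aaa and qrouter-bbbb stand in for the UUID-based names that Networking generates:
$ ip netns list
$ ip netns exec qrouter-bbbb ip addr
$ ip netns exec qrouter-bbbb ip route
$ ip netns exec qdhcp-aaa ip addr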
@@ -292,7 +295,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1 - + Under the service tenant, define the public @@ -334,7 +337,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1 - + The Compute host configuration resembles the @@ -349,7 +352,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1 scenario. - + In this configuration, the network namespaces are @@ -358,7 +361,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1 - + In this scenario, there are four network namespaces @@ -373,8 +376,8 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1
Linux Bridge - This section describes how the Linux Bridge plug-in implements the OpenStack - Networking abstractions. For information about DHCP and L3 agents, see This section describes how the Linux Bridge plug-in implements the Networking + abstractions. For information about DHCP and L3 agents, see .
Configuration @@ -400,7 +403,7 @@ physical_interface_mappings = physnet2:eth1 - + Under the service tenant, create the shared router, define the @@ -440,7 +443,7 @@ physical_interface_mappings = physnet2:eth1 - + @@ -478,14 +481,14 @@ physical_interface_mappings = physnet2:eth1 The following figure shows the network devices on the network host. - + The following figure shows how the Linux Bridge plug-in uses network namespaces to provide isolation.veth pairs form connections between the Linux bridges and the network namespaces. - +
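On a Linux Bridge network host, the same kind of inspection uses the bridge utilities and network namespaces; a brief sketch:
$ brctl show
$ ip netns list
$ ip netns exec <namespace> ip addr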
@@ -497,7 +500,7 @@ physical_interface_mappings = physnet2:eth1 Internet. - + Under the service tenant, define the public @@ -540,7 +543,7 @@ physical_interface_mappings = physnet2:eth1 - + The configuration on the compute host is very similar to the configuration in scenario 1. The @@ -553,7 +556,7 @@ physical_interface_mappings = physnet2:eth1 scenario. - + The main difference between the configuration in this scenario and the previous one @@ -561,7 +564,7 @@ physical_interface_mappings = physnet2:eth1 across the two subnets, as shown in the following figure. - + In this scenario, there are four network namespaces @@ -592,7 +595,7 @@ physical_interface_mappings = physnet2:eth1 illustrated below. - @@ -602,7 +605,7 @@ physical_interface_mappings = physnet2:eth1 This is achieved by sending broadcasts packets over unicasts only to the relevant agents as illustrated below. - The partial-mesh is available with the Open vSwitch and diff --git a/doc/admin-guide-cloud/section_networking_adv_features.xml b/doc/admin-guide-cloud/section_networking_adv_features.xml index 82eaf6ac4f..4f4813601e 100644 --- a/doc/admin-guide-cloud/section_networking_adv_features.xml +++ b/doc/admin-guide-cloud/section_networking_adv_features.xml @@ -233,7 +233,7 @@ actions for users with the admin role. An authorized client or an administrative user can view and set the provider extended attributes through Networking API - calls. See for details + calls. See for details on policy configuration.
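The partial-mesh behaviour described here is provided by the l2population mechanism driver. A hedged sketch of the relevant settings, assuming the ML2 plug-in with the Open vSwitch or Linux Bridge agent; file names and option layout can differ by release:
In /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
mechanism_drivers = openvswitch,linuxbridge,l2population
In the agent configuration file, [agent] section:
l2_population = True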
diff --git a/doc/config-reference/ch_networkingconfigure.xml b/doc/config-reference/ch_networkingconfigure.xml index 2fd49864e8..d5d209e7f2 100644 --- a/doc/config-reference/ch_networkingconfigure.xml +++ b/doc/config-reference/ch_networkingconfigure.xml @@ -8,12 +8,11 @@ xmlns:ns3="http://www.w3.org/1998/Math/MathML" xmlns:ns="http://docbook.org/ns/docbook"> Networking - This chapter explains the configuration options and scenarios for OpenStack Networking. - For installation prerequisites, steps, and use cases, refer to corresponding chapter in the - OpenStack Installation Guide. + This chapter explains the OpenStack Networking configuration options. For installation + prerequisites, steps, and use cases, see the OpenStack Installation + Guide for your distribution (docs.openstack.org) and the Cloud + Administrator Guide. - - - -