Merge "Move Networking scenarios section from Configuration Reference to Cloud Admin Guide"

This commit is contained in:
Jenkins 2014-01-04 16:08:48 +00:00 committed by Gerrit Code Review
commit 9c8a43d420
7 changed files with 312 additions and 389 deletions

View File

@ -1490,6 +1490,10 @@ enabled = True</programlisting>
</para>
</section>
</section>
<xi:include href="section_networking-config-identity.xml"/>
<xi:include href="section_networking-scenarios.xml"/>
<xi:include href="section_networking-adv-config.xml"/>
<xi:include href="section_networking-multi-dhcp-agents.xml"/>
<section xml:id="section_networking-use">
<title>Use Networking</title>
<para>You can start and stop OpenStack Networking services
@ -2191,7 +2195,7 @@ enabled = True</programlisting>
</section>
<xi:include href="section_networking_adv_features.xml"/>
<xi:include href="section_networking_adv_operational_features.xml"/>
<section xml:id="section_auth">
<section xml:id="section_networking_auth">
<title>Authentication and authorization</title>
<para>Networking uses the Identity Service as the default
authentication service. When the Identity Service is

View File

@ -197,7 +197,8 @@ mysql&gt; grant all on &lt;database-name&gt;.* to '&lt;user-name&gt;'@'%';</comp
--config-file &lt;l3 config&gt;</computeroutput></screen>
<para>You must configure a driver that matches the plug-in
running on the service. The driver creates the
routing interface. <table rules="all">
routing interface.
<table rules="all">
<caption>Basic settings</caption>
<col width="50%"/>
<col width="50%"/>

View File

@ -3,22 +3,21 @@
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Identity Service</title>
<title>Configure Identity Service for Networking</title>
<procedure>
<title>To configure the Identity Service for use with
Networking</title>
<step>
<title>Create the <function>get_id()</function> function</title>
<para>The <function>get_id()</function> function stores the ID
of created objects, and removes error-prone copying and
pasting of object IDs in later steps:</para>
<para>The <function>get_id()</function> function stores the ID of created objects, and removes
the need to copy and paste object IDs in later steps:</para>
<substeps>
<step>
<para>Add the following function to your
<filename>.bashrc</filename> file:</para>
<screen><prompt>$</prompt> <userinput>function get_id () {
<programlisting>function get_id () {
echo `"$@" | awk '/ id / { print $4 }'`
}</userinput></screen>
}</programlisting>
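<para>For example, you can capture the ID of a newly created object in a shell
variable. The <literal>demo</literal> tenant name here is purely illustrative:</para>
<screen><prompt>$</prompt> <userinput>DEMO_TENANT_ID=$(get_id keystone tenant-create --name demo)</userinput></screen>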
</step>
<step>
<para>Source the <filename>.bashrc</filename> file:</para>
@ -28,35 +27,33 @@ echo `"$@" | awk '/ id / { print $4 }'`
</step>
<step>
<title>Create the Networking service entry</title>
<para>OpenStack Networking must be available in the OpenStack
Compute service catalog. Create the service:</para>
<para>Networking must be available in the Compute service catalog. Create the service:</para>
<screen><prompt>$</prompt> <userinput>NEUTRON_SERVICE_ID=$(get_id keystone service-create --name neutron --type network --description 'OpenStack Networking Service')</userinput></screen>
</step>
<step>
<title>Create the Networking service endpoint
entry</title>
<para>The way that you create an OpenStack Networking endpoint
entry depends on whether you are using the SQL catalog driver
or the template catalog driver:</para>
<para>The way that you create a Networking endpoint entry depends on whether you are using the
SQL or the template catalog driver:</para>
<itemizedlist>
<listitem>
<para>If you use the <emphasis>SQL driver</emphasis>, run
these command with these parameters: specified region
($REGION), IP address of the OpenStack Networking server
($IP), and service ID ($NEUTRON_SERVICE_ID, obtained in
the previous step).</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create --region $REGION --service-id $NEUTRON_SERVICE_ID --publicurl 'http://$IP:9696/' --adminurl 'http://$IP:9696/' --internalurl 'http://$IP:9696/'</userinput></screen>
<para>If you use the <emphasis>SQL driver</emphasis>, run the following command with the
specified region (<literal>$REGION</literal>), IP address of the Networking server
(<literal>$IP</literal>), and service ID (<literal>$NEUTRON_SERVICE_ID</literal>,
obtained in the previous step).</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create --region $REGION --service-id $NEUTRON_SERVICE_ID \
--publicurl 'http://$IP:9696/' --adminurl 'http://$IP:9696/' --internalurl 'http://$IP:9696/'</userinput></screen>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create --region myregion --service-id $NEUTRON_SERVICE_ID \
--publicurl "http://10.211.55.17:9696/" --adminurl "http://10.211.55.17:9696/" --internalurl "http://10.211.55.17:9696/" </userinput></screen>
--publicurl "http://10.211.55.17:9696/" --adminurl "http://10.211.55.17:9696/" --internalurl "http://10.211.55.17:9696/" </userinput></screen>
</listitem>
<listitem>
<para>If you are using the <emphasis>template
driver</emphasis>, add the following content to your
OpenStack Compute catalog template file
(default_catalog.templates), using these parameters: given
region ($REGION) and IP address of the OpenStack
Networking server ($IP).</para>
<para>If you are using the <emphasis>template driver</emphasis>, specify the following
parameters in your Compute catalog template file
(<filename>default_catalog.templates</filename>), along with the region
(<literal>$REGION</literal>) and IP address of the Networking server
(<literal>$IP</literal>).</para>
<programlisting language="bash">catalog.$REGION.network.publicURL = http://$IP:9696
catalog.$REGION.network.adminURL = http://$IP:9696
catalog.$REGION.network.internalURL = http://$IP:9696
@ -65,19 +62,16 @@ catalog.$REGION.network.name = Network Service</programlisting>
<programlisting language="bash">catalog.$Region.network.publicURL = http://10.211.55.17:9696
catalog.$Region.network.adminURL = http://10.211.55.17:9696
catalog.$Region.network.internalURL = http://10.211.55.17:9696
catalog.$Region.network.name = Network Service</programlisting>
catalog.$Region.network.name = Network Service</programlisting>
</listitem>
</itemizedlist>
</step>
<step>
<title>Create the Networking service user</title>
<para>You must provide admin user credentials that OpenStack
Compute and some internal components of OpenStack Networking
can use to access the OpenStack Networking API. The suggested
approach is to create a special <literal>service</literal>
tenant, create a <literal>neutron</literal> user within this
tenant, and to assign this user an <literal>admin</literal>
role.</para>
<para>You must provide admin user credentials that Compute and some internal Networking
components can use to access the Networking API. Create a special <literal>service</literal>
tenant and a <literal>neutron</literal> user within this tenant, and assign an
<literal>admin</literal> role to this user.</para>
<substeps>
<step>
<para>Create the <literal>admin</literal> role:</para>
@ -101,62 +95,47 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
</substeps>
</step>
</procedure>
<para>For information about how to create service entries and users.
see the <citetitle>OpenStack Installation Guide</citetitle> for
your distribution (<link xlink:href="docs.openstack.org"
<para>For information about how to create service entries and users, see the <citetitle>OpenStack
Installation Guide</citetitle> for your distribution (<link xlink:href="docs.openstack.org"
>docs.openstack.org</link>).</para>
<section xml:id="nova_with_neutron">
<title>Compute</title>
<para>If you use OpenStack Networking, do not run the OpenStack
Compute <systemitem class="service">nova-network</systemitem>
service (like you do in traditional OpenStack Compute
deployments). Instead, OpenStack Compute delegates most
network-related decisions to OpenStack Networking. OpenStack
Compute proxies tenant-facing API calls to manage security
groups and floating IPs to Networking APIs. However,
operator-facing tools such as <systemitem class="service"
>nova-manage</systemitem>, are not proxied and should not be
used.</para>
<para>If you use Networking, do not run the Compute <systemitem class="service"
>nova-network</systemitem> service (like you do in traditional Compute deployments).
Instead, Compute delegates most network-related decisions to Networking. Compute proxies
tenant-facing API calls to manage security groups and floating IPs to Networking APIs.
However, operator-facing tools such as <systemitem class="service">nova-manage</systemitem>
are not proxied and should not be used.</para>
<warning>
<para>When you configure networking, you must use this guide. Do
not rely on OpenStack Compute networking documentation or past
experience with OpenStack Compute. If a
<command>nova</command> command or configuration option
related to networking is not mentioned in this guide, the
command is probably not supported for use with OpenStack
Networking. In particular, you cannot use CLI tools like
<command>nova-manage</command> and <command>nova</command>
to manage networks or IP addressing, including both fixed and
floating IPs, with OpenStack Networking.</para>
<para>When you configure networking, you must use this guide. Do not rely on Compute
networking documentation or past experience with Compute. If a <command>nova</command>
command or configuration option related to networking is not mentioned in this guide, the
command is probably not supported for use with Networking. In particular, you cannot use CLI
tools like <command>nova-manage</command> and <command>nova</command> to manage networks or
IP addressing, including both fixed and floating IPs, with Networking.</para>
</warning>
<note>
<para>It is strongly recommended that you uninstall <systemitem
class="service">nova-network</systemitem> and reboot any
physical nodes that have been running <systemitem
class="service">nova-network</systemitem> before using them
to run OpenStack Networking. Inadvertently running the
<systemitem class="service">nova-network</systemitem>
process while using OpenStack Networking can cause problems,
as can stale iptables rules pushed down by previously running
<systemitem class="service">nova-network</systemitem>.
</para>
<para>Uninstall <systemitem class="service">nova-network</systemitem> and reboot any physical
nodes that have been running <systemitem class="service">nova-network</systemitem> before
using them to run Networking. Inadvertently running the <systemitem class="service"
>nova-network</systemitem> process while using Networking can cause problems, as can stale
iptables rules pushed down by previously running <systemitem class="service"
>nova-network</systemitem>.</para>
</note>
<para>To ensure that OpenStack Compute works properly with
OpenStack Networking (rather than the legacy <systemitem
class="service">nova-network</systemitem> mechanism), you must
adjust settings in the <filename>nova.conf</filename>
configuration file.</para>
<para>To ensure that Compute works properly with Networking
(rather than the legacy <systemitem
class="service">nova-network</systemitem> mechanism), you must
adjust settings in the <filename>nova.conf</filename>
configuration file.</para>
</section>
<section xml:id="nova_with_neutron_api">
<title>Networking API and credential configuration</title>
<para>Each time a VM is provisioned or de-provisioned in OpenStack
Compute, <systemitem class="service">nova-*</systemitem>
services communicate with OpenStack Networking using the
standard API. For this to happen, you must configure the
following items in the <filename>nova.conf</filename> file (used
by each <systemitem class="service">nova-compute</systemitem>
and <systemitem class="service">nova-api</systemitem>
instance).</para>
<para>Each time you provision or de-provision a VM in Compute, <systemitem class="service"
>nova-*</systemitem> services communicate with Networking using the standard API. For this
to happen, you must configure the following items in the <filename>nova.conf</filename> file
(used by each <systemitem class="service">nova-compute</systemitem> and <systemitem
class="service">nova-api</systemitem> instance).</para>
<table rules="all">
<caption>nova.conf API and credential settings</caption>
<col width="20%"/>
@ -170,12 +149,13 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
<tbody>
<tr>
<td><para><literal>network_api_class</literal></para></td>
<td><para>Modify from the default to
<literal>nova.network.neutronv2.api.API</literal>, to
indicate that OpenStack Networking should be used rather
than the traditional <systemitem class="service"
>nova-network </systemitem> networking model.
</para></td>
<td>
<para>Modify from the default to
<literal>nova.network.neutronv2.api.API</literal>, to
indicate that Networking should be used rather than the
traditional <systemitem class="service" >nova-network
</systemitem> networking model.</para>
</td>
</tr>
<tr>
<td><para><literal>neutron_url</literal></para></td>
@ -191,45 +171,46 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
</tr>
<tr>
<td><para><literal>neutron_admin_tenant_name</literal></para></td>
<td><para>Update to the name of the service tenant created
in the above section on OpenStack Identity
configuration.</para></td>
<td>
<para>Update to the name of the service tenant created in
the above section on Identity configuration.</para>
</td>
</tr>
<tr>
<td><para><literal>neutron_admin_username</literal></para></td>
<td><para>Update to the name of the user created in the
above section on OpenStack Identity configuration.
</para></td>
<td>
<para>Update to the name of the user created in the above
section on Identity configuration.</para>
</td>
</tr>
<tr>
<td><para><literal>neutron_admin_password</literal></para></td>
<td><para>Update to the password of the user created in the
above section on OpenStack Identity configuration.
</para></td>
<td>
<para>Update to the password of the user created in the
above section on Identity configuration.</para>
</td>
</tr>
<tr>
<td><para><literal>neutron_admin_auth_url</literal></para></td>
<td><para>Update to the OpenStack Identity server IP and
port. This is the Identity (keystone) admin API server
IP and port value, and not the Identity service API IP
and port.</para></td>
<td>
<para>Update to the Identity server IP and port. This is
the Identity (keystone) admin API server IP and port
value, and not the Identity service API IP and
port.</para>
</td>
</tr>
</tbody>
</table>
</section>
<section xml:id="nova_config_security_groups">
<title>Configure security groups</title>
<para>The OpenStack Networking Service provides security group
functionality using a mechanism that is more flexible and
powerful than the security group capabilities built into
OpenStack Compute. Therefore, if you use OpenStack Networking,
you should always disable built-in security groups and proxy all
security group calls to the OpenStack Networking API . If you do
not, security policies will conflict by being simultaneously
applied by both services.</para>
<para>To proxy security groups to OpenStack Networking, use the
following configuration values in
<filename>nova.conf</filename>:</para>
<para>The Networking Service provides security group functionality using a mechanism that is
more flexible and powerful than the security group capabilities built into Compute. Therefore,
if you use Networking, you should always disable built-in security groups and proxy all
security group calls to the Networking API. If you do not, security policies will conflict by
being simultaneously applied by both services.</para>
<para>To proxy security groups to Networking, use the following configuration values in
<filename>nova.conf</filename>:</para>
<table rules="all">
<caption>nova.conf security group settings</caption>
<col width="20%"/>
@ -251,8 +232,7 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
</tr>
<tr>
<td><para><literal>security_group_api</literal></para></td>
<td><para>Update to <literal>neutron</literal>, so that all
security group requests are proxied to the OpenStack
<td><para>Update to <literal>neutron</literal>, so that all security group requests are proxied to the
Network Service.</para></td>
</tr>
</tbody>
@ -260,13 +240,10 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
</section>
<section xml:id="nova_config_metadata">
<title>Configure metadata</title>
<para>The OpenStack Compute service allows VMs to query metadata
associated with a VM by making a web request to a special
169.254.169.254 address. OpenStack Networking supports proxying
those requests to <systemitem class="service"
>nova-api</systemitem>, even when the requests are made from
isolated networks, or from multiple networks that use
overlapping IP addresses.</para>
<para>The Compute service allows VMs to query metadata associated with a VM by making a web
request to a special 169.254.169.254 address. Networking supports proxying those requests to
<systemitem class="service">nova-api</systemitem>, even when the requests are made from
isolated networks, or from multiple networks that use overlapping IP addresses.</para>
<para>To enable proxying the requests, you must update the
following fields in <filename>nova.conf</filename>:</para>
<table rules="all">
@ -323,10 +300,9 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
<title>Example nova.conf (for <systemitem class="service"
>nova-compute</systemitem> and <systemitem class="service"
>nova-api</systemitem>)</title>
<para>Example values for the above settings, assuming a cloud
controller node running OpenStack Compute and OpenStack
Networking with an IP address of 192.168.1.2.</para>
<screen><computeroutput>network_api_class=nova.network.neutronv2.api.API
<para>Example values for the above settings, assuming a cloud controller node running Compute
and Networking with an IP address of 192.168.1.2:</para>
<programlisting language="ini">network_api_class=nova.network.neutronv2.api.API
neutron_url=http://192.168.1.2:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
@ -339,6 +315,6 @@ firewall_driver=nova.virt.firewall.NoopFirewallDriver
service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=foo
</computeroutput></screen>
</programlisting>
</section>
</section>

View File

@ -1,39 +1,4 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!-- Some useful entities borrowed from HTML -->
<!ENTITY ndash "&#x2013;">
<!ENTITY mdash "&#x2014;">
<!ENTITY hellip "&#x2026;">
<!ENTITY plusmn "&#xB1;">
<!-- Useful for describing APIs -->
<!ENTITY GET '<command xmlns="http://docbook.org/ns/docbook">GET</command>'>
<!ENTITY PUT '<command xmlns="http://docbook.org/ns/docbook">PUT</command>'>
<!ENTITY POST '<command xmlns="http://docbook.org/ns/docbook">POST</command>'>
<!ENTITY DELETE '<command xmlns="http://docbook.org/ns/docbook">DELETE</command>'>
<!ENTITY CHECK '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
<imageobject role="fo">
<imagedata fileref="figures/Check_mark_23x20_02.svg"
format="SVG" scale="60"/>
</imageobject>
<imageobject role="html">
<imagedata fileref="../figures/Check_mark_23x20_02.png"
format="PNG" />
</imageobject>
</inlinemediaobject>'>
<!ENTITY ARROW '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
<imageobject role="fo">
<imagedata fileref="figures/Arrow_east.svg"
format="SVG" scale="60"/>
</imageobject>
<imageobject role="html">
<imagedata fileref="../figures/Arrow_east.png"
format="PNG" />
</imageobject>
</inlinemediaobject>'>
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
@ -60,18 +25,16 @@ format="PNG" />
+-----------------+--------------------------+
</computeroutput></screen></para>
</note>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata
fileref="../../common/figures/demo_multiple_dhcp_agents.png"
fileref="../common/figures/demo_multiple_dhcp_agents.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>
</informalfigure>
<para>There will be three hosts in the setup.</para>
<para>There will be three hosts in the setup.
<table rules="all">
<caption>Hosts for Demo</caption>
<caption>Hosts for demo</caption>
<thead>
<tr>
<th>Host</th>
@ -86,13 +49,11 @@ format="PNG" />
The node must have at least one network
interface that is connected to the
Management Network.</para>
<note>
<para>
<systemitem class="service"
<para>Note that <systemitem class="service"
>nova-network</systemitem> should
not be running because it is replaced
by Neutron.</para>
</note></td>
</td>
</tr>
<tr>
<td>HostA</td>
@ -105,14 +66,12 @@ format="PNG" />
</tr>
</tbody>
</table>
</para>
<section xml:id="multi_agent_demo_configuration">
<title>Configuration</title>
<itemizedlist>
<listitem>
<para><emphasis role="bold">controlnode - Neutron
Server</emphasis></para>
<orderedlist>
<listitem>
<procedure>
<title>controlnode&#8212;Neutron Server</title>
<step>
<para>Neutron configuration file
<filename>/etc/neutron/neutron.conf</filename>:</para>
<programlisting language="ini">[DEFAULT]
@ -121,8 +80,8 @@ rabbit_host = controlnode
allow_overlapping_ips = True
host = controlnode
agent_down_time = 5</programlisting>
</listitem>
<listitem>
</step>
<step>
<para>Update the plug-in configuration file
<filename>/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini</filename>:</para>
<programlisting language="ini">[vlans]
@ -133,14 +92,11 @@ connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0</programlisting>
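<para>With both files in place, you can start the Neutron server against them. This
invocation is a sketch; on most systems an init script handles it:</para>
<screen><prompt>#</prompt> <userinput>neutron-server --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini</userinput></screen>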
</listitem>
</orderedlist>
</listitem>
<listitem>
<para><emphasis role="bold">HostA and HostB - L2
Agent</emphasis></para>
<orderedlist>
<listitem>
</step>
</procedure>
<procedure>
<title>HostA and HostB&#8212;L2 Agent</title>
<step>
<para>Neutron configuration file
<filename>/etc/neutron/neutron.conf</filename>:</para>
<programlisting language="ini">[DEFAULT]
@ -148,8 +104,8 @@ rabbit_host = controlnode
rabbit_password = openstack
# host = HostB on hostb
host = HostA</programlisting>
</listitem>
<listitem>
</step>
<step>
<para>Update the plug-in configuration file
<filename>/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini</filename>:</para>
<programlisting language="ini">[vlans]
@ -160,8 +116,8 @@ connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0</programlisting>
</listitem>
<listitem>
</step>
<step>
<para>Update the nova configuration file
<filename>/etc/nova/nova.conf</filename>:</para>
<programlisting language="ini">[DEFAULT]
@ -174,22 +130,17 @@ neutron_auth_strategy=keystone
neutron_admin_tenant_name=servicetenant
neutron_url=http://100.1.1.10:9696/
firewall_driver=nova.virt.firewall.NoopFirewallDriver</programlisting>
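<para>After you edit the configuration files, restart the affected services so that
the changes take effect. The service names here are illustrative and vary by
distribution:</para>
<screen><prompt>#</prompt> <userinput>service nova-compute restart</userinput>
<prompt>#</prompt> <userinput>service neutron-linuxbridge-agent restart</userinput></screen>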
</listitem>
</orderedlist>
</listitem>
<listitem>
<para><emphasis role="bold">HostA and HostB - DHCP
Agent</emphasis></para>
<orderedlist>
<listitem>
</step>
</procedure>
<procedure>
<title>HostA and HostB&#8212;DHCP Agent</title>
<step>
<para>Update the DHCP configuration file
<filename>/etc/neutron/dhcp_agent.ini</filename>:</para>
<programlisting language="ini">[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver</programlisting>
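<para>You can then start the DHCP agent on each host against both configuration files.
This invocation is a sketch; on most systems an init script handles it:</para>
<screen><prompt>#</prompt> <userinput>neutron-dhcp-agent --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/dhcp_agent.ini</userinput></screen>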
</listitem>
</orderedlist>
</listitem>
</itemizedlist>
</step>
</procedure>
</section>
<section xml:id="demo_multiple_operation">
<title>Commands in agent management and scheduler
@ -205,10 +156,9 @@ export OS_PASSWORD=adminpassword
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
</note>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Settings</emphasis></para>
<para>To experiment, you need VMs and a neutron
<procedure>
<title>Settings</title>
<step><para>To experiment, you need VMs and a neutron
network:</para>
<screen><prompt>$</prompt> <userinput>nova list</userinput>
<computeroutput>+--------------------------------------+-----------+--------+---------------+
@ -225,17 +175,16 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
+--------------------------------------+------+--------------------------------------+
| 89dca1c6-c7d4-4f7a-b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45-d5cf646db9d1 |
+--------------------------------------+------+--------------------------------------+</computeroutput></screen>
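<para>If the network does not exist yet, you can create it first. The name and CIDR
here match the listings above:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create net1</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.1.0/24</userinput></screen>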
</listitem>
<listitem>
<para><emphasis role="bold">Manage agents in neutron
deployment</emphasis></para>
</step>
</procedure>
<procedure>
<title>Manage agents in neutron deployment</title>
<para>Every agent that supports these extensions will
register itself with the neutron server when it
starts up.</para>
<orderedlist>
<listitem>
<para>List all agents:</para>
<screen><prompt>$</prompt> <userinput>neutron agent-list</userinput>
<step>
<para>List all agents:</para>
<screen><prompt>$</prompt> <userinput>neutron agent-list</userinput>
<computeroutput>+--------------------------------------+--------------------+-------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
@ -255,8 +204,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
<filename>neutron.conf</filename>
file. Otherwise, the <option>alive</option> status
is <literal>xxx</literal>.</para>
</listitem>
<listitem>
</step>
<step>
<para>List the DHCP agents that host a
specified network.</para>
<para>In some deployments, one DHCP agent is
@ -275,8 +224,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
+--------------------------------------+-------+----------------+-------+
</computeroutput></screen>
</listitem>
<listitem>
</step>
<step>
<para>List the networks hosted by a given DHCP
agent.</para>
<para>This command shows which networks a
@ -288,8 +237,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
| 89dca1c6-c7d4-4f7a-b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45-d5cf646db9d1 10.0.1.0/24 |
+--------------------------------------+------+---------------------------------------------------+
</computeroutput></screen>
</listitem>
<listitem>
</step>
<step>
<para>Show agent details.</para>
<para>The <command>agent-show</command>
command shows details for a specified
@ -358,20 +307,17 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
<literal>bridge-mapping</literal> and
the number of virtual network devices on
this L2 agent.</para>
</listitem>
</orderedlist>
</listitem>
<listitem>
<para><emphasis role="bold">Manage assignment of
networks to DHCP agent</emphasis></para>
</step>
</procedure>
<procedure>
<title>Manage assignment of networks to DHCP agent</title>
<para>Now that you have run the
<command>net-list-on-dhcp-agent</command> and
<command>dhcp-agent-list-hosting-net</command>
commands, you can add a network to a DHCP agent
and remove one from it.</para>
<orderedlist>
<listitem>
<para>Default scheduling.</para>
<step>
<para>Default scheduling.</para>
<para>When you create a network with one port,
you can schedule it to an active DHCP
agent. If many active DHCP agents are
@ -398,8 +344,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
<systemitem class="service"
>dnsmasq</systemitem> service only if
there is a DHCP.</para>
</listitem>
<listitem>
</step>
<step>
<para>Assign a network to a given DHCP
agent.</para>
<para>To add another DHCP agent to host the
@ -416,8 +362,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
<para>Both DHCP agents host the
<literal>net2</literal>
network.</para>
</listitem>
<listitem>
</step>
<step>
<para>Remove a network from a specified DHCP
agent.</para>
<para>This command is the sibling command for
@ -436,19 +382,16 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
HostB is hosting the
<literal>net2</literal>
network.</para>
</listitem>
</orderedlist>
</listitem>
<listitem>
<para><emphasis role="bold">HA of DHCP
agents</emphasis></para>
</step>
</procedure>
<procedure>
<title>HA of DHCP agents</title>
<para>Boot a VM on net2. Let both DHCP agents host
<literal>net2</literal>. Fail the agents in
turn to see if the VM can still get the desired
IP.</para>
<orderedlist>
<listitem>
<para>Boot a VM on net2.</para>
<step>
<para>Boot a VM on net2.</para>
<screen><prompt>$</prompt> <userinput>neutron net-list</userinput>
<computeroutput>+--------------------------------------+------+--------------------------------------------------+
| id | name | subnets |
@ -467,8 +410,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
| c7c0481c-3db8-4d7a-a948-60ce8211d585 | myserver3 | ACTIVE | net1=10.0.1.5 |
| f62f4731-5591-46b1-9d74-f0c901de567f | myserver4 | ACTIVE | net2=9.0.1.2 |
+--------------------------------------+-----------+--------+---------------+</computeroutput></screen>
</listitem>
<listitem>
</step>
<step>
<para>Make sure both DHCP agents are hosting
'net2'.</para>
<para>Use the previous commands to assign the
@ -480,10 +423,10 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) |
+--------------------------------------+-------+----------------+-------+</computeroutput></screen>
</listitem>
<listitem>
<procedure>
<title>To test the HA</title>
</step>
</procedure>
<procedure>
<title>Test the HA</title>
<step>
<para>Log in to the
<literal>myserver4</literal> VM,
@ -518,11 +461,8 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
VM gets the desired IP again.</para>
</step>
</procedure>
</listitem>
</orderedlist>
</listitem>
<listitem>
<para>Disable and remove an agent</para>
<procedure>
<title>Disable and remove an agent</title>
<para>An administrator might want to disable an agent
if a system hardware or software upgrade is
planned. Some agents that support scheduling also
@ -532,7 +472,7 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
agent. After the agent is disabled, you can safely
remove the agent. Remove the resources on the
agent before you delete the agent.</para>
<para>To run the following commands, you must stop the
<step><para>To run the following commands, you must stop the
DHCP agent on HostA.</para>
<screen><prompt>$</prompt> <userinput>neutron agent-update --admin-state-up False a0c1c21c-d4f4-4577-9ec7-908f2d48622d</userinput>
<prompt>$</prompt> <userinput>neutron agent-list</userinput>
@ -556,7 +496,7 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
+--------------------------------------+--------------------+-------+-------+----------------+</computeroutput></screen>
<para>After deletion, if you restart the DHCP agent,
it appears on the agent list again.</para>
</listitem>
</itemizedlist>
</section>
</step>
</procedure>
</section>
</section>

View File

@ -9,8 +9,8 @@
<section xml:id="under_the_hood_openvswitch">
<?dbhtml stop-chunking?>
<title>Open vSwitch</title>
<para>This section describes how the Open vSwitch plug-in implements the OpenStack
Networking abstractions.</para>
<para>This section describes how the Open vSwitch plug-in implements the Networking
abstractions.</para>
<section xml:id="under_the_hood_openvswitch_configuration">
<title>Configuration</title>
<para>This example uses VLAN isolation on the switches to isolate tenant networks. This
@ -35,7 +35,7 @@ bridge_mappings = physnet2:br-eth1</programlisting></para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-1.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Under the <literal>service</literal> tenant, create the shared router, define the
@ -76,7 +76,7 @@ bridge_mappings = physnet2:br-eth1</programlisting></para>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-ovs-compute.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-1-ovs-compute.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
@ -97,11 +97,10 @@ bridge_mappings = physnet2:br-eth1</programlisting></para>
is how hypervisors such as KVM and Xen implement a virtual network interface card
(typically called a VIF or vNIC). An ethernet frame sent to a TAP device is received
by the guest operating system.</para>
<para>A <emphasis role="italic">veth pair</emphasis> is a pair of virtual network
interfaces correctly directly together. An ethernet frame sent to one end of a veth
pair is received by the other end of a veth pair. OpenStack networking makes use of
veth pairs as virtual patch cables in order to make connections between virtual
bridges.</para>
<para>A <emphasis role="italic">veth pair</emphasis> is a pair of directly connected
virtual network interfaces. An ethernet frame sent to one end of a veth pair
is received by the other end of a veth pair. Networking uses veth pairs as
virtual patch cables to make connections between virtual bridges.</para>
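<para>You can reproduce this behavior outside of OpenStack by creating a veth pair by
hand. This example is purely illustrative; Networking creates its veth pairs for
you:</para>
<screen><prompt>#</prompt> <userinput>ip link add veth0 type veth peer name veth1</userinput></screen>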
<para>A <emphasis role="italic">Linux bridge</emphasis> behaves like a hub: you can
connect multiple (physical or virtual) network interface devices to a Linux bridge.
Any ethernet frame that comes in from one interface attached to the bridge is
@ -113,10 +112,10 @@ bridge_mappings = physnet2:br-eth1</programlisting></para>
</simplesect>
<simplesect>
<title>Integration bridge</title>
<para>The <literal>br-int</literal> OpenvSwitch bridge is the integration bridge: all of
the guests running on the compute host connect to this bridge. OpenStack Networking
implements isolation across these guests by configuring the
<literal>br-int</literal> ports.</para>
<para>The <literal>br-int</literal> OpenvSwitch bridge is the integration bridge: all
guests running on the compute host connect to this bridge. Networking
implements isolation across these guests by configuring the
<literal>br-int</literal> ports.</para>
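<para>To see which guests are attached to the integration bridge on a compute host,
you can list its ports. The output varies by deployment:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl list-ports br-int</userinput></screen>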
</simplesect>
<simplesect>
<title>Physical connectivity bridge</title>
@ -139,19 +138,19 @@ bridge_mappings = physnet2:br-eth1</programlisting></para>
<simplesect>
<title>Security groups: iptables and Linux bridges</title>
<para>Ideally, the TAP device <literal>vnet0</literal> would be connected directly to
the integration bridge, <literal>br-int</literal>. Unfortunately, this isn't
possible because of how OpenStack security groups are currently implemented.
OpenStack uses iptables rules on the TAP devices such as <literal>vnet0</literal> to
implement security groups, and Open vSwitch is not compatible with iptables rules
that are applied directly on TAP devices that are connected to an Open vSwitch
port.</para>
<para>OpenStack Networking uses an extra Linux bridge and a veth pair as a workaround for
this issue. Instead of connecting <literal>vnet0</literal> to an Open vSwitch
bridge, it is connected to a Linux bridge,
<literal>qbr<replaceable>XXX</replaceable></literal>. This bridge is
connected to the integration bridge, <literal>br-int</literal>, through the
<literal>(qvb<replaceable>XXX</replaceable>,
qvo<replaceable>XXX</replaceable>)</literal> veth pair.</para>
the integration bridge, <literal>br-int</literal>. Unfortunately, this isn't
possible because of how OpenStack security groups are currently implemented.
OpenStack uses iptables rules on the TAP devices such as
<literal>vnet0</literal> to implement security groups, and Open vSwitch
is not compatible with iptables rules that are applied directly on TAP
devices that are connected to an Open vSwitch port.</para>
<para>Networking uses an extra Linux bridge and a veth pair as a workaround for this
issue. Instead of connecting <literal>vnet0</literal> to an Open vSwitch
bridge, it is connected to a Linux bridge,
<literal>qbr<replaceable>XXX</replaceable></literal>. This bridge is
connected to the integration bridge, <literal>br-int</literal>, through the
<literal>(qvb<replaceable>XXX</replaceable>,
qvo<replaceable>XXX</replaceable>)</literal> veth pair.</para>
</simplesect>
</section>
<section xml:id="under_the_hood_openvswitch_scenario1_network">
@ -170,7 +169,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
The following figure shows the network devices on the network host:</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-ovs-network.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-1-ovs-network.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>As on the compute host, there is an Open vSwitch integration bridge
@ -187,99 +186,103 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
packets traverse that veth pair in this example.</para></note>
<simplesect><title>Open vSwitch internal ports</title>
<para>The network host uses Open vSwitch <emphasis role="italic">internal
ports</emphasis>. Internal ports enable you to assign one
or more IP addresses to an Open vSwitch bridge. In previous example, the
<literal>br-int</literal> bridge has four internal
ports: <literal>tap<replaceable>XXX</replaceable></literal>,
<literal>qr-<replaceable>YYY</replaceable></literal>,
<literal>qr-<replaceable>ZZZ</replaceable></literal>,
<literal>tap<replaceable>WWW</replaceable></literal>. Each internal port has
a separate IP address associated with it. An internal port,
<literal>qg-VVV</literal>, is on the <literal>br-ex</literal> bridge.</para>
ports</emphasis>. Internal ports enable you to assign one or more IP
addresses to an Open vSwitch bridge. In the previous example, the
<literal>br-int</literal> bridge has four internal ports:
<literal>tap<replaceable>XXX</replaceable></literal>,
<literal>qr-<replaceable>YYY</replaceable></literal>,
<literal>qr-<replaceable>ZZZ</replaceable></literal>, and
<literal>tap<replaceable>WWW</replaceable></literal>. Each internal
port has a separate IP address associated with it. An internal port,
<literal>qg-VVV</literal>, is on the <literal>br-ex</literal>
bridge.</para>
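<para>As an illustration, this is how an internal port can be created by hand with
Open vSwitch; Networking performs the equivalent operation itself:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-port br-int tapXXX -- set Interface tapXXX type=internal</userinput></screen>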
</simplesect>
<simplesect><title>DHCP agent</title>
<para>By default, The OpenStack Networking DHCP agent uses a program called dnsmasq
to provide DHCP services to guests. OpenStack Networking must create an internal
port for each network that requires DHCP services and attach a dnsmasq process to
that port. In the previous example, the interface
<literal>tap<replaceable>XXX</replaceable></literal> is on subnet
<literal>net01_subnet01</literal>, and the interface
<literal>tap<replaceable>WWW</replaceable></literal> is on
<literal>net02_subnet01</literal>.</para>
<para>By default, the Networking DHCP agent uses a process called dnsmasq to provide
DHCP services to guests. Networking must create an internal port for each
network that requires DHCP services and attach a dnsmasq process to that
port. In the previous example, the
<literal>tap<replaceable>XXX</replaceable></literal> interface is on
<literal>net01_subnet01</literal>, and the
<literal>tap<replaceable>WWW</replaceable></literal> interface is on
<literal>net02_subnet01</literal>.</para>
</simplesect>
<simplesect>
<title>L3 agent (routing)</title>
<para>The OpenStack Networking L3 agent implements routing through the use of Open
vSwitch internal ports and relies on the network host to route the packets across
the interfaces. In this example: interface<literal>qr-YYY</literal>, which is on
subnet <literal>net01_subnet01</literal>, has an IP address of 192.168.101.1/24,
interface <literal>qr-<replaceable>ZZZ</replaceable></literal>, which is on subnet
<literal>net02_subnet01</literal>, has an IP address of
<literal>192.168.102.1/24</literal>, and interface
<literal>qg-<replaceable>VVV</replaceable></literal>, which has an IP
address of <literal>10.64.201.254/24</literal>. Because of each of these interfaces
is visible to the network host operating system, it will route the packets
appropriately across the interfaces, as long as an administrator has enabled IP
forwarding.</para>
<para>The Networking L3 agent uses Open vSwitch internal ports to implement routing and
relies on the network host to route the packets across the interfaces. In
this example, the <literal>qr-YYY</literal> interface is on
<literal>net01_subnet01</literal> and has the IP address
192.168.101.1/24. The <literal>qr-<replaceable>ZZZ</replaceable></literal>
interface is on <literal>net02_subnet01</literal> and has the IP address
<literal>192.168.102.1/24</literal>. The
<literal>qg-<replaceable>VVV</replaceable></literal> interface has
the IP address <literal>10.64.201.254/24</literal>. Because each of these
interfaces is visible to the network host operating system, the network host
routes the packets across the interfaces, as long as an administrator has
enabled IP forwarding.</para>
<para>The L3 agent uses iptables to implement floating IPs by performing network address
translation (NAT).</para>
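<para>You can confirm that IP forwarding is enabled on the network host; a value of
<literal>1</literal> means that it is:</para>
<screen><prompt>#</prompt> <userinput>sysctl net.ipv4.ip_forward</userinput></screen>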
</simplesect>
<simplesect>
<title>Overlapping subnets and network namespaces</title>
<para>One problem with using the host to implement routing is that there is a chance
that one of the OpenStack Networking subnets might overlap with one of the physical
networks that the host uses. For example, if the management network is implemented
on <literal>eth2</literal> (not shown in the previous example), by coincidence happens
to also be on the <literal>192.168.101.0/24</literal> subnet, then this will cause
routing problems because it is impossible ot determine whether a packet on this
subnet should be sent to <literal>qr-YYY</literal> or <literal>eth2</literal>. In
general, if end-users are permitted to create their own logical networks and
subnets, then the system must be designed to avoid the possibility of such
collisions.</para>
<para>OpenStack Networking uses Linux <emphasis role="italic">network namespaces
</emphasis>to prevent collisions between the physical networks on the network host,
and the logical networks used by the virtual machines. It also prevents collisions
across different logical networks that are not routed to each other, as you will see
in the next scenario.</para>
<para>A network namespace can be thought of as an isolated environment that has its own
networking stack. A network namespace has its own network interfaces, routes, and
iptables rules. You can think of like a chroot jail, except for networking instead
of a file system. As an aside, LXC (Linux containers) use network namespaces to
implement networking virtualization.</para>
<para>OpenStack Networking creates network namespaces on the network host in order
to avoid subnet collisions.</para>
<para>Tn this example, there are three network namespaces, as depicted in the following figure.<itemizedlist>
<listitem>
<para><literal>qdhcp-<replaceable>aaa</replaceable></literal>: contains the
<literal>tap<replaceable>XXX</replaceable></literal> interface
and the dnsmasq process that listens on that interface, to provide DHCP
services for <literal>net01_subnet01</literal>. This allows overlapping
IPs between <literal>net01_subnet01</literal> and any other subnets on
the network host.</para>
</listitem>
<listitem>
<para><literal>qrouter-<replaceable>bbbb</replaceable></literal>: contains
the <literal>qr-<replaceable>YYY</replaceable></literal>,
<literal>qr-<replaceable>ZZZ</replaceable></literal>, and
<literal>qg-<replaceable>VVV</replaceable></literal> interfaces,
and the corresponding routes. This namespace implements
<literal>router01</literal> in our example.</para>
</listitem>
<listitem>
<para><literal>qdhcp-<replaceable>ccc</replaceable></literal>: contains the
<literal>tap<replaceable>WWW</replaceable></literal> interface
and the dnsmasq process that listens on that interface, to provide DHCP
services for <literal>net02_subnet01</literal>. This allows overlapping
IPs between <literal>net02_subnet01</literal> and any other subnets on
the network host.</para>
</listitem>
</itemizedlist></para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-ovs-netns.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>One problem with using the host to implement routing is that one of the
Networking subnets might overlap with one of the physical networks that the
host uses. For example, if the management network is implemented on
<literal>eth2</literal> and also happens to be on the
<literal>192.168.101.0/24</literal> subnet, routing problems will occur
because the host can't determine whether to send a packet on this subnet to
<literal>qr-YYY</literal> or <literal>eth2</literal>. If end users are
permitted to create their own logical networks and subnets, you must design
the system so that such collisions do not occur.</para>
<para>Networking uses Linux <emphasis role="italic">network namespaces </emphasis>to
prevent collisions between the physical networks on the network host, and
the logical networks used by the virtual machines. It also prevents
collisions across different logical networks that are not routed to each
other, as the following scenario shows.</para>
<para>A network namespace is an isolated environment with its own networking stack. A
network namespace has its own network interfaces, routes, and iptables
rules. Consider it a chroot jail, except for networking instead of for a
file system. LXC (Linux containers) use network namespaces to implement
networking virtualization.</para>
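<para>You can list the namespaces on the network host and run commands inside one of
them. The <literal>qrouter-<replaceable>bbbb</replaceable></literal> name follows the
naming convention used later in this example:</para>
<screen><prompt>#</prompt> <userinput>ip netns list</userinput>
<prompt>#</prompt> <userinput>ip netns exec qrouter-<replaceable>bbbb</replaceable> ip addr</userinput></screen>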
<para>Networking creates network namespaces on the network host to avoid subnet
collisions.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../common/figures/under-the-hood-scenario-1-ovs-netns.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>In this example, there are three network namespaces, as shown in the figure above:<itemizedlist>
<listitem>
<para><literal>qdhcp-<replaceable>aaa</replaceable></literal>:
contains the
<literal>tap<replaceable>XXX</replaceable></literal>
interface and the dnsmasq process that listens on that interface
to provide DHCP services for <literal>net01_subnet01</literal>.
This allows overlapping IPs between
<literal>net01_subnet01</literal> and any other subnets on
the network host.</para>
</listitem>
<listitem>
<para><literal>qrouter-<replaceable>bbbb</replaceable></literal>:
contains the
<literal>qr-<replaceable>YYY</replaceable></literal>,
<literal>qr-<replaceable>ZZZ</replaceable></literal>,
and <literal>qg-<replaceable>VVV</replaceable></literal>
interfaces, and the corresponding routes. This namespace
implements <literal>router01</literal> in our example.</para>
</listitem>
<listitem>
<para><literal>qdhcp-<replaceable>ccc</replaceable></literal>:
contains the
<literal>tap<replaceable>WWW</replaceable></literal>
interface and the dnsmasq process that listens on that
interface, to provide DHCP services for
<literal>net02_subnet01</literal>. This allows overlapping
IPs between <literal>net02_subnet01</literal> and any other
subnets on the network host.</para>
</listitem>
</itemizedlist></para>
</simplesect>
</section>
</section>
@ -292,7 +295,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-2.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Under the <literal>service</literal> tenant, define the public
@ -334,7 +337,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-ovs-compute.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-2-ovs-compute.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<note><para>The Compute host configuration resembles the
@ -349,7 +352,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
scenario.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-ovs-network.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-2-ovs-network.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>In this configuration, the network namespaces are
@ -358,7 +361,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-ovs-netns.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-2-ovs-netns.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>In this scenario, there are four network namespaces
@ -373,8 +376,8 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
</section>
<section xml:id="under_the_hood_linuxbridge">
<title>Linux Bridge</title>
<para>This section describes how the Linux Bridge plug-in implements the OpenStack
Networking abstractions. For information about DHCP and L3 agents, see <xref
<para>This section describes how the Linux Bridge plug-in implements the Networking
abstractions. For information about DHCP and L3 agents, see <xref
linkend="under_the_hood_openvswitch_scenario1"/>.</para>
<section xml:id="under_the_hood_linuxbridge_configuration">
<title>Configuration</title>
@ -400,7 +403,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-1.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Under the <literal>service</literal> tenant, create the shared router, define the
@ -440,7 +443,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-linuxbridge-compute.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-1-linuxbridge-compute.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
@ -478,14 +481,14 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
<para>The following figure shows the network devices on the network host.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-linuxbridge-network.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-1-linuxbridge-network.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>The following figure shows how the Linux Bridge plug-in uses network namespaces to
provide isolation.</para><note><para>veth pairs form connections between the
Linux bridges and the network namespaces.</para></note><mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-linuxbridge-netns.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-1-linuxbridge-netns.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
</section>
@ -497,7 +500,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
Internet.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-2.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Under the <literal>service</literal> tenant, define the public
@ -540,7 +543,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-linuxbridge-compute.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-2-linuxbridge-compute.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<note><para>The configuration on the compute host is very similar to the configuration in scenario 1. The
@ -553,7 +556,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
scenario.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-linuxbridge-network.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-2-linuxbridge-network.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>The main difference between the configuration in this scenario and the previous one
@ -561,7 +564,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
across the two subnets, as shown in the following figure.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-linuxbridge-netns.png" contentwidth="6in"/>
<imagedata fileref="../common/figures/under-the-hood-scenario-2-linuxbridge-netns.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>In this scenario, there are four network namespaces
@ -592,7 +595,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
illustrated below.</para>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/ml2_without_l2pop_full_mesh.png"
<imagedata fileref="../common/figures/ml2_without_l2pop_full_mesh.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>
@ -602,7 +605,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
This is achieved by sending broadcast packets over unicast only to the relevant
agents as illustrated below.<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/ml2_without_l2pop_partial_mesh.png"
<imagedata fileref="../common/figures/ml2_without_l2pop_partial_mesh.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>The partial-mesh is available with the <emphasis>Open vSwitch</emphasis> and

View File

@ -233,7 +233,7 @@
actions for users with the admin role. An authorized
client or an administrative user can view and set the
provider extended attributes through Networking API
calls. See <xref linkend="section_auth"/> for details
calls. See <xref linkend="section_networking_auth"/> for details
on policy configuration.</para>
</section>
<section xml:id="provider_api_workflow">

View File

@ -8,12 +8,11 @@
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook">
<title>Networking</title>
<para>This chapter explains the configuration options and scenarios for OpenStack Networking.
For installation prerequisites, steps, and use cases, refer to corresponding chapter in the
<emphasis role="italic">OpenStack Installation Guide</emphasis>.</para>
<para>This chapter explains the OpenStack Networking configuration options. For installation
prerequisites, steps, and use cases, see the <citetitle>OpenStack Installation
Guide</citetitle> for your distribution (<link xlink:href="docs.openstack.org"
>docs.openstack.org</link>) and <citetitle><link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/">Cloud
Administrator Guide</link></citetitle>.</para>
<xi:include href="networking/section_networking-options-reference.xml"/>
<xi:include href="networking/section_networking-config-identity.xml"/>
<xi:include href="networking/section_networking-scenarios.xml"/>
<xi:include href="networking/section_networking-adv-config.xml"/>
<xi:include href="networking/section_networking-multi-dhcp-agents.xml"/>
</chapter>