Move install and config info from Network Admin Guide to Install Guide and Config Ref

Applied systemitem and programlisting tags where indicated by reviewers
Deleted extra spaces after equals signs and other white space
Fixed a couple of typos; major copyedit to be done under another bug

Closes-bug: 1223542
Change-Id: Ia4ecbdb304e18a42769b50d52d33e3757710fe5b
Author: Nermina Miller
nerminamiller 2013-09-22 02:02:31 -04:00 committed by annegentle
parent 7208d6b7bd
commit b390e59445
47 changed files with 387 additions and 3203 deletions

File diff suppressed because it is too large


@@ -1,15 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<appendix xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="app_demo">
<title>Demos Setup</title>
<para>This section describes how to configure the OpenStack
Networking service and its components for some typical use
cases.</para>
<xi:include href="app_demo_flat.xml"/>
<?hard-pagebreak?>
<xi:include href="app_demo_single_router.xml"/>
<?hard-pagebreak?>
<xi:include href="app_demo_routers_with_private_networks.xml" />
<?hard-pagebreak?>
<xi:include href="app_demo_multi_dhcp_agents.xml"/>
</appendix>


@@ -1,31 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<appendix xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="app_pagination_and_sorting_support">
<title>Plugin pagination and sorting support</title>
<table rules="all">
<caption>Plugins that support native pagination and
sorting</caption>
<thead>
<tr>
<th>Plugin</th>
<th>Supports Native Pagination</th>
<th>Supports Native Sorting</th>
</tr>
</thead>
<tbody>
<tr>
<td>Open vSwitch</td>
<td>True</td>
<td>True</td>
</tr>
<tr>
<td>LinuxBridge</td>
<td>True</td>
<td>True</td>
</tr>
</tbody>
</table>
</appendix>


@@ -50,7 +50,7 @@
<revdescription>
<itemizedlist spacing="compact">
<listitem>
<para>Added <xref linkend="ch_under_the_hood"/>.</para>
<para>Added ch_under_the_hood.</para>
</listitem>
</itemizedlist>
</revdescription>
@@ -131,19 +131,5 @@
</revision>
</revhistory>
</info>
<xi:include href="ch_preface.xml"/>
<xi:include href="ch_overview.xml"/>
<xi:include href="ch_install.xml"/>
<xi:include href="ch_config.xml"/>
<xi:include href="ch_using.xml"/>
<xi:include href="ch_under_the_hood.xml"/>
<xi:include href="ch_adv_features.xml"/>
<xi:include href="ch_adv_config.xml"/>
<xi:include href="ch_auth.xml"/>
<xi:include href="ch_adv_operational_features.xml"/>
<xi:include href="ch_high_avail.xml"/>
<xi:include href="ch_limitations.xml"/>
<xi:include href="app_demo.xml"/>
<xi:include href="app_core.xml"/>
<xi:include href="app_pagination_and_sorting_support.xml"/>
</book>


@@ -202,7 +202,7 @@
<code>extension:provider_network:set</code>
action. The default OpenStack Networking API policy
configuration authorizes both actions for users with
the admin role. See <xref linkend="ch_auth"/> for
the admin role. See ch_auth for
details on policy configuration.</para>
</section>
<section xml:id="provider_api_workflow">
@@ -228,23 +228,9 @@
<para>
<screen><prompt>$</prompt> <userinput>neutron net-create &lt;name&gt; --tenant_id &lt;tenant-id&gt; --provider:network_type gre --provider:segmentation_id &lt;tunnel-id&gt;</userinput></screen>
</para>
<para>When creating flat networks or VLAN networks,
&lt;phys-net-name&gt; must be known to the plugin. See
<xref linkend="ovs_neutron_plugin"/> and <xref
linkend="linuxbridge_conf"/> for details on
configuring network_vlan_ranges to identify all
physical networks. When creating VLAN networks,
&lt;VID&gt; can fall either within or outside any
configured ranges of VLAN IDs from which tenant
networks are allocated. Similarly, when creating GRE
networks, &lt;tunnel-id&gt; can fall either within or
outside any tunnel ID ranges from which tenant
networks are allocated.</para>
<para>Once provider networks have been created, subnets
can be allocated and they can be used similarly to
other virtual networks, subject to authorization
policy based on the specified
&lt;tenant_id&gt;.</para>
<para>When creating flat networks or VLAN networks, &lt;phys-net-name&gt; must be known to the plugin. See ovs_neutron_plugin and linuxbridge_conf for details on configuring network_vlan_ranges to identify all physical networks. When creating VLAN networks, &lt;VID&gt; can fall either within or outside any configured ranges of VLAN IDs from which tenant networks are allocated. Similarly, when creating GRE networks, &lt;tunnel-id&gt; can fall either within or outside any tunnel ID ranges from which tenant networks are allocated.</para>
<para>Once provider networks have been created, subnets can be allocated and they can be used similarly to other virtual networks, subject to authorization
policy based on the specified &lt;tenant_id&gt;.</para>
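<para>For example, the following commands sketch the creation of a
VLAN provider network and a subnet on it. The network name, the
physical network name <literal>physnet1</literal>, and VLAN ID 1000
are illustrative assumptions; <literal>physnet1</literal> must appear
in the plugin's network_vlan_ranges configuration:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create provider-vlan --tenant_id &lt;tenant-id&gt; --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1000</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create provider-vlan 192.0.2.0/24 --tenant_id &lt;tenant-id&gt;</userinput></screen>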
</section>
</section>
<section xml:id="l3_router_and_nat">
@@ -259,8 +245,8 @@
networks, and can also provide a "gateway" that connects
one or more private L2 networks to a shared "external"
network (e.g., a public network for access to the
Internet). See <xref linkend="use_cases_single_router"/>
and <xref linkend="use_cases_tenant_router"/> for details
Internet). See use_cases_single_router
and use_cases_tenant_router for details
on common models of deploying OpenStack Networking L3
routers.</para>
<para>The L3 router provides basic NAT capabilities on
@@ -508,27 +494,17 @@ neutron floatingip-associate &lt;floatingip-id&gt; &lt;internal VM port-id&gt; <
change is made restart <systemitem class="service">nova-api</systemitem> and <systemitem class="service">nova-compute</systemitem> in order to pick up this change. After
this change is made, you can use both the OpenStack Compute and OpenStack
Networking security group API at the same time.</para>
<note>
<itemizedlist>
<listitem><para>To use the OpenStack Compute security group
API with OpenStack Networking, the OpenStack Networking
plugin must implement the security group API. The
following plugins currently implement this: Nicira
NVP, Open vSwitch, Linux Bridge, NEC, and Ryu.</para></listitem>
<listitem><para>You must configure the correct firewall driver in the
<literal>securitygroup</literal> section of the plugin/agent configuration file.
Some plugins and agents, such as Linux Bridge Agent and Open vSwitch Agent, use
the no-operation driver as the default, which results in non-working security
groups.</para></listitem>
<listitem><para>When using the security group API through OpenStack
Compute, security groups are applied to all ports on
an instance. The reason for this is that OpenStack
Compute security group APIs are instances based and
not port based as OpenStack Networking.</para></listitem>
</itemizedlist>
</note>
<note>
<itemizedlist>
<listitem><para>To use the OpenStack Compute security group API with OpenStack Networking, the OpenStack Networking plugin must implement the security group API. The
following plugins currently implement this: Nicira NVP, Open vSwitch, Linux Bridge, NEC, and Ryu.</para>
</listitem>
<listitem><para>You must configure the correct firewall driver in the <literal>securitygroup</literal> section of the plugin/agent configuration file. Some plugins and agents, such as Linux Bridge Agent and Open vSwitch Agent, use the no-operation driver as the default, which results in non-working security groups.</para>
</listitem>
<listitem><para>When using the security group API through OpenStack Compute, security groups are applied to all ports on an instance. The reason for this is that OpenStack Compute security group APIs are instances based and not port based as OpenStack Networking.</para>
</listitem>
</itemizedlist>
</note>
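<para>As an illustration, a working <literal>securitygroup</literal> section
for the Open vSwitch agent might look like the following; the driver class
shown is the iptables-based driver shipped with the agent, so verify the
correct class for your plugin and release:</para>
<programlisting>[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver</programlisting>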
<section xml:id="securitygroup_api_abstractions">
<title>Security Group API Abstractions</title>
<table rules="all">


@@ -1,183 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_auth">
<title>Authentication and Authorization</title>
<para>OpenStack Networking uses the OpenStack Identity Service
(project name keystone) as the default authentication service.
When OpenStack Identity is enabled, users who submit requests
to the OpenStack Networking service must provide an
authentication token in the X-Auth-Token request header. Users get
this token by authenticating with the OpenStack Identity
endpoint. For more information about authentication with
OpenStack Identity Service, see the OpenStack Identity
documentation.</para>
<para>When OpenStack Identity is enabled, it is not mandatory to
specify tenant_id for resources in create requests because the
tenant identifier is derived from the authentication
token.</para>
<note>
<para>The default authorization settings only allow
administrative users to create resources on behalf of a
different tenant.</para>
</note>
<para>OpenStack Networking uses information received from
OpenStack Identity to authorize user requests. OpenStack
Networking handles two kinds of authorization policies:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Operation-based</emphasis>:
policies specify access criteria for specific
operations, possibly with fine-grained control over
specific attributes.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Resource-based:</emphasis>
whether access to a specific resource is granted
according to the permissions configured for the
resource (currently available only for the network
resource). The actual authorization policies enforced
in OpenStack Networking might vary from deployment to
deployment.</para>
</listitem>
</itemizedlist>
<para>The policy engine reads entries from the <emphasis
role="italic">policy.json</emphasis> file. The actual
location of this file might vary from distribution to
distribution. Entries can be updated while the system is
running, and no service restart is required. That is to say,
every time the policy file is updated, the policies will be
automatically reloaded. Currently the only way of updating
such policies is to edit the policy file.</para>
<para>In this section, the terms "policy" and "rule" both refer to
objects that are specified in the same way in the policy file;
there are no syntax differences between a rule and a policy. A
policy is something that is matched directly from the
OpenStack Networking policy engine. A rule is a component of
policies, which are then evaluated. For instance in
<code>create_subnet: [["admin_or_network_owner"]]</code>,
<emphasis role="italic">create_subnet</emphasis> is a
policy, and <emphasis role="italic"
>admin_or_network_owner</emphasis> is a rule.</para>
<para>Policies are triggered by the OpenStack Networking policy
engine whenever one of them matches an OpenStack Networking
API operation or a specific attribute being used in a given
operation. For instance the <code>create_subnet</code> policy
is triggered every time a <code>POST /v2.0/subnets</code>
request is sent to the OpenStack Networking server; on the
other hand <code>create_network:shared</code> is triggered
every time the <emphasis role="italic">shared</emphasis>
attribute is explicitly specified (and set to a value
different from its default) in a <code>POST
/v2.0/networks</code> request. Policies can
also be related to specific API extensions;
for instance <code>extension:provider_network:set</code> will
be triggered if the attributes defined by the Provider Network
extensions are specified in an API request.</para>
<para>An authorization policy can be composed of one or more
rules. If more than one rule is specified, the policy evaluates
successfully if any of the rules evaluates successfully; if an
API operation matches multiple policies, all the policies must
evaluate successfully. Also, authorization rules are
recursive. Once a rule is matched, it can be resolved to
another rule until a terminal rule is reached.</para>
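<para>For example, with the entries below (taken from the default
policy file shown later in this chapter), any policy that references
<code>rule:admin_or_owner</code> resolves to the
<code>admin_or_owner</code> rule, which in turn resolves to the
terminal rules <code>role:admin</code> and
<code>tenant_id:%(tenant_id)s</code>:</para>
<programlisting language="bash">"admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"default": [["rule:admin_or_owner"]]</programlisting>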
<para>The OpenStack Networking policy engine currently defines the
following kinds of terminal rules:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Role-based rules</emphasis>:
evaluate successfully if the user submitting the
request has the specified role. For instance
<code>"role:admin"</code>is successful if the user
submitting the request is an administrator.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Field-based rules:
</emphasis> evaluate successfully if a field of the
resource specified in the current request matches a
specific value. For instance
<code>"field:networks:shared=True"</code> is
successful if the attribute <emphasis role="italic"
>shared</emphasis> of the <emphasis role="italic"
>network</emphasis> resource is set to
true.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Generic rules:</emphasis>
compare an attribute in the resource with an attribute
extracted from the user's security credentials and
evaluate successfully if the comparison is
successful. For instance
<code>"tenant_id:%(tenant_id)s"</code> is
successful if the tenant identifier in the resource is
equal to the tenant identifier of the user submitting
the request.</para>
</listitem>
</itemizedlist>
<para>The following is an extract from the default policy.json
file:</para>
<programlisting language="bash">{
[1] "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"shared": [["field:networks:shared=True"]],
[2] "default": [["rule:admin_or_owner"]],
"create_subnet": [["rule:admin_or_network_owner"]],
"get_subnet": [["rule:admin_or_owner"], ["rule:shared"]],
"update_subnet": [["rule:admin_or_network_owner"]],
"delete_subnet": [["rule:admin_or_network_owner"]],
"create_network": [],
[3] "get_network": [["rule:admin_or_owner"], ["rule:shared"]],
[4] "create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [],
[5] "create_port:mac_address": [["rule:admin_or_network_owner"]],
"create_port:fixed_ips": [["rule:admin_or_network_owner"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_or_owner"]],
"delete_port": [["rule:admin_or_owner"]]
}</programlisting>
<para>[1] is a rule that evaluates successfully if the current
user is an administrator or the owner of the resource
specified in the request (that is, the tenant identifiers match).</para>
<para>[2] is the default policy which is always evaluated if an
API operation does not match any of the policies in
policy.json.</para>
<para>[3] This policy will evaluate successfully if either
<emphasis role="italic">admin_or_owner</emphasis>, or
<emphasis role="italic">shared</emphasis> evaluates
successfully.</para>
<para>[4] This policy restricts the ability to manipulate
the <emphasis role="italic">shared</emphasis> attribute for a
network to administrators only.</para>
<para>[5] This policy restricts the ability to manipulate
the <emphasis role="italic">mac_address</emphasis> attribute
for a port to administrators and the owner of the network
to which the port is attached.</para>
<para>In some cases, some operations should be restricted to
administrators only. As a further example, consider how this
sample policy file could be modified in a scenario where tenants
are allowed only to define networks and see their own resources,
and all other operations can be performed only in an
administrative context:</para>
<programlisting language="bash">{
"admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"default": [["rule:admin_only"]],
"create_subnet": [["rule:admin_only"]],
"get_subnet": [["rule:admin_or_owner"]],
"update_subnet": [["rule:admin_only"]],
"delete_subnet": [["rule:admin_only"]],
"create_network": [],
"get_network": [["rule:admin_or_owner"]],
"create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [["rule:admin_only"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_only"]],
"delete_port": [["rule:admin_only"]]
}</programlisting>
</chapter>


@@ -1,36 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_high_avail">
<title>High Availability</title>
<para>Several aspects of an OpenStack Networking deployment benefit from high availability to
withstand individual node failures. In general, neutron-server and neutron-dhcp-agent can be
run in an active-active fashion. The neutron-l3-agent service can be run only as
active/passive, to avoid IP conflicts with respect to gateway IP addresses.</para>
<section xml:id="ha_pacemaker">
<title>OpenStack Networking High Availability with
Pacemaker</title>
<para>You can run some OpenStack Networking services in a
cluster (active/passive, or active/active for the OpenStack
Networking server only) with Pacemaker.</para>
<para>You can download the latest resource agents here:<itemizedlist>
<listitem>
<para>neutron-server: <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-server"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
<listitem>
<para>neutron-dhcp-agent: <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-agent-dhcp"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
<listitem>
<para>neutron-l3-agent: <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-agent-l3"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
</itemizedlist></para>
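<para>As a minimal sketch, after installing the neutron-server
resource agent on your cluster nodes, you might add it to the
cluster with the <command>crm</command> shell. The resource name
and operation values here are illustrative assumptions; check the
resource agent itself for the parameters it actually supports:</para>
<screen><prompt>$</prompt> <userinput>crm configure primitive p_neutron-server ocf:openstack:neutron-server op monitor interval="30s" timeout="30s"</userinput></screen>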
<db:note xmlns:db="http://docbook.org/ns/docbook"><db:para>If you need more information about "<emphasis role="italic">How to build a
cluster</emphasis>", see the <link
xlink:href="http://www.clusterlabs.org/wiki/Documentation">Pacemaker
documentation</link>.</db:para></db:note>
</section>
</chapter>


@@ -1,87 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_limitations">
<title>Limitations</title>
<para>
<itemizedlist>
<listitem>
<para><emphasis>No equivalent for nova-network
--multi_host flag:</emphasis> Nova-network has
a model where the L3, NAT, and DHCP processing
happen on the compute node itself, rather than a
dedicated networking node. OpenStack Networking
now supports running multiple l3-agents and dhcp-agents,
with load split across those agents, but the
tight coupling of that scheduling with the location of
the VM is not supported in Grizzly. The Havana release is expected
to include an exact replacement for the --multi_host flag
in nova-network.</para>
</listitem>
<listitem>
<para><emphasis>Linux network namespaces required on nodes running neutron-l3-agent
or neutron-dhcp-agent if overlapping IPs are in use:</emphasis> To support
overlapping IP addresses, the OpenStack Networking DHCP and L3 agents
use Linux network namespaces by default. The hosts running these processes must
support network namespaces. To support network namespaces, the following are
required:</para>
<itemizedlist>
<listitem>
<para>Linux kernel 2.6.24 or newer (with CONFIG_NET_NS=y in kernel
configuration) and</para>
</listitem>
<listitem>
<para>iproute2 utilities ('ip' command) version 3.1.0 (aka 20111117) or
newer</para>
</listitem>
</itemizedlist>
<para>To check whether your host supports namespaces, try running the following
commands as root:</para>
<screen><computeroutput>ip netns add test-ns
ip netns exec test-ns ifconfig</computeroutput></screen>
<para>If the preceding commands do not produce errors, your platform is likely
sufficient to use the dhcp-agent or l3-agent with namespaces. In our experience,
Ubuntu 12.04 and later support namespaces, as do Fedora 17 and newer, but some
older RHEL platforms do not by default. It might be possible to upgrade the
iproute2 package on a platform that does not support namespaces by default.</para>
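<para>To check which iproute2 version is installed (versions are also
identified by date stamps such as 20111117, as noted above), you can
run:</para>
<screen><prompt>$</prompt> <userinput>ip -V</userinput></screen>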
<para>If you need to disable namespaces, make sure the
<filename>neutron.conf</filename> used by neutron-server has the following
setting:</para>
<screen><computeroutput>allow_overlapping_ips=False</computeroutput></screen>
<para>and that the dhcp_agent.ini and l3_agent.ini have the following
setting:</para>
<screen><computeroutput>use_namespaces=False</computeroutput></screen>
<note><para>If the host does not support namespaces then the <systemitem class="service"
>neutron-l3-agent</systemitem> and <systemitem class="service"
>neutron-dhcp-agent</systemitem> should be run on different hosts. This
is due to the fact that there is no isolation between the IP addresses
created by the L3 agent and by the DHCP agent. By manipulating the routing
the user can ensure that these networks have access to one another.</para></note>
<para>If you run both the L3 and DHCP services on the same node, you should enable
namespaces to avoid conflicts with routes:</para>
<screen><computeroutput>use_namespaces=True</computeroutput></screen>
</listitem>
</itemizedlist>
<itemizedlist><listitem>
<para><emphasis>No IPv6 support for L3 agent:</emphasis> The neutron-l3-agent, used
by many plugins to implement L3 forwarding, supports only IPv4 forwarding.
Currently, no errors are reported if you configure IPv6 addresses through the
API.</para>
</listitem>
<listitem>
<para><emphasis>ZeroMQ support is experimental</emphasis>: Some agents, including
neutron-dhcp-agent, neutron-openvswitch-agent, and neutron-linuxbridge-agent use
RPC to communicate. ZeroMQ is an available option in the configuration file, but
has not been tested and should be considered experimental. In particular, there
are believed to be issues with ZeroMQ and the dhcp agent.</para>
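<para>For reference, the RPC transport is selected with the
<literal>rpc_backend</literal> option in
<filename>neutron.conf</filename>. The module path below is an
assumption based on the common RPC code in this release; as noted,
this transport should be considered experimental:</para>
<programlisting>rpc_backend=neutron.openstack.common.rpc.impl_zmq</programlisting>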
</listitem><listitem>
<para><emphasis>MetaPlugin is experimental</emphasis>: This release includes a
"MetaPlugin" that is intended to support multiple plugins at the same time for
different API requests, based on the content of those API requests. This
functionality has not been widely reviewed or tested by the core team, and
should be considered experimental until further validation is performed.</para>
</listitem>
</itemizedlist>
</para>
</chapter>


@@ -1,623 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_overview">
<title>Overview</title>
<para>This chapter describes the high-level concepts and
components of an OpenStack Networking deployment.</para>
<section xml:id="WhatIsNeutron">
<title>What is OpenStack Networking?</title>
<para>The OpenStack Networking project was created to provide a rich
API for defining network connectivity and
addressing in the cloud. The OpenStack Networking project gives
operators the ability to leverage different networking
technologies to power their cloud networking.</para>
<para>For a detailed description of the OpenStack Networking API
abstractions and their attributes, see the <link
xlink:href="http://docs.openstack.org/api/openstack-network/2.0/content/"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:m="http://www.w3.org/1998/Math/MathML"
xmlns:html="http://www.w3.org/1999/xhtml"
xmlns:db="http://docbook.org/ns/docbook"
><citetitle>OpenStack Networking API Guide
(v2.0)</citetitle></link>.</para>
<section xml:id="rich_network">
<title>OpenStack Networking API: Rich Control over Network Functionality</title>
<para>OpenStack Networking is a virtual network service that provides a
powerful API to define the network connectivity and
addressing used by devices from other services, such
as OpenStack Compute.   </para>
<para>The OpenStack Compute API has a virtual server
abstraction to describe computing resources. Similarly,
the OpenStack Networking API has virtual network, subnet, and port
abstractions to describe networking resources. In more
detail: <itemizedlist>
<listitem>
<para><emphasis role="bold">Network</emphasis>.
An isolated L2
segment, analogous to VLAN in the physical
networking world.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Subnet</emphasis>.
A block of v4 or v6 IP addresses and
associated configuration state.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Port</emphasis>. A
connection point for attaching a single
device, such as the NIC of a virtual
server, to a virtual network. Also
describes the associated network
configuration, such as the MAC and IP
addresses to be used on that port.  
</para>
</listitem>
</itemizedlist>
You can configure rich network
topologies by creating and configuring networks and
subnets, and then instructing other OpenStack services
like OpenStack Compute to attach virtual devices to ports on these
networks.  In particular, OpenStack Networking supports each tenant
having multiple private networks, and allows tenants
to choose their own IP addressing scheme (even if
those IP addresses overlap with those used by other
tenants). The OpenStack Networking service:
<itemizedlist>
<listitem>
<para>Enables advanced cloud networking
use cases, such as building multi-tiered web
applications and allowing applications to be migrated
to the cloud without changing IP addresses.
</para>
</listitem>
<listitem>
<para>
Offers flexibility for the cloud administrator
to customize network offerings.
</para>
</listitem>
<listitem>
<para>Provides a mechanism that lets
cloud administrators expose additional API
capabilities through API extensions.  Commonly, new
capabilities are first introduced as an API extension,
and over time become part of the core OpenStack Networking API.
</para>
</listitem>
</itemizedlist>
</para>
</section>
<!-- <?hard-pagebreak?> -->
<section xml:id="flexibility">
<title>Plugin Architecture: Flexibility to Choose Different Network
Technologies</title>
<para>Enhancing traditional networking solutions to
provide rich cloud networking is challenging.
Traditional networking is not designed to scale to
cloud proportions or to handle automatic configuration.</para>
<para>The original OpenStack Compute network implementation assumed a
very basic model of performing all isolation through
Linux VLANs and IP tables. OpenStack Networking introduces the
concept of a <emphasis role="italic"
>plugin</emphasis>, which is a back-end
implementation of the OpenStack Networking API. A plugin can use a
variety of technologies to implement the logical API
requests.  Some OpenStack Networking plugins might use basic Linux
VLANs and IP tables, while others might use more
advanced technologies, such as L2-in-L3 tunneling or
OpenFlow, to provide similar benefits.</para>
<para>The following plugins are currently included in the OpenStack Networking distribution: <itemizedlist>
<listitem>
<para><emphasis role="bold">Big Switch Plugin (Floodlight REST Proxy)</emphasis>.
<link
xlink:href="http://www.openflowhub.org/display/floodlightcontroller/Quantum+REST+Proxy+Plugin"
>http://www.openflowhub.org/display/floodlightcontroller/Quantum+REST+Proxy+Plugin</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Brocade Plugin</emphasis>.
<link
xlink:href="https://github.com/brocade/brocade"
>https://github.com/brocade/brocade</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Cisco</emphasis>.
<link
xlink:href="http://wiki.openstack.org/cisco-neutron"
>http://wiki.openstack.org/cisco-neutron</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Cloudbase Hyper-V Plugin</emphasis>.
<link xlink:href="http://www.cloudbase.it/quantum-hyper-v-plugin/"
>http://www.cloudbase.it/quantum-hyper-v-plugin/</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Linux Bridge Plugin</emphasis>.
Documentation included in this guide and at
<link
xlink:href="http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin"
>http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Mellanox Plugin</emphasis>. <link
xlink:href="https://wiki.openstack.org/wiki/Mellanox-Neutron/">
https://wiki.openstack.org/wiki/Mellanox-Neutron/</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Midonet Plugin</emphasis>.
<link
xlink:href="http://www.midokura.com/">
http://www.midokura.com/</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">NEC OpenFlow Plugin</emphasis>.
<link
xlink:href="http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin"
>http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Nicira NVP Plugin</emphasis>.
Documentation included in this guide,
<link
xlink:href="http://www.vmware.com/products/datacenter-virtualization/nicira.html">
NVP Product Overview </link>, and
<link
xlink:href="http://www.nicira.com/support"
>NVP Product Support</link>.
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Open vSwitch Plugin</emphasis>.
Documentation included in this guide.
</para>
</listitem>
<listitem>
<para><emphasis role="bold">PLUMgrid</emphasis>.
<link
xlink:href="https://wiki.openstack.org/wiki/PLUMgrid-Neutron"
>https://wiki.openstack.org/wiki/PLUMgrid-Neutron</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Ryu Plugin</emphasis>.
<link
xlink:href="https://github.com/osrg/ryu/wiki/OpenStack"
>https://github.com/osrg/ryu/wiki/OpenStack</link>
</para>
</listitem>
</itemizedlist>
</para>
<para>Plugins can have different properties for hardware requirements, features, performance,
scale, or operator tools. Because OpenStack Networking supports a large number of plugins,
the cloud administrator is able to weigh different options and decide which networking
technology is right for the deployment.
</para>
<?hard-pagebreak?>
<para>Not all OpenStack networking plugins are compatible with all possible OpenStack compute drivers:</para>
<table rules="all">
<caption>Plugin Compatibility with OpenStack Compute Drivers</caption>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<thead>
<tr>
<th></th>
<th>Libvirt (KVM/QEMU)</th>
<th>XenServer</th>
<th>VMware</th>
<th>Hyper-V</th>
<th>Bare-metal</th>
<th>PowerVM</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bigswitch / Floodlight</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Brocade</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Cisco</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Cloudbase Hyper-V</td>
<td></td>
<td></td>
<td></td>
<td>Yes</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Linux Bridge</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Mellanox</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Midonet</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>NEC OpenFlow</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Nicira NVP</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Open vSwitch</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Plumgrid</td>
<td>Yes</td>
<td></td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Ryu</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</section>
</section>
<section xml:id="Architecture">
<title>OpenStack Networking Architecture</title>
<para>This section describes the high-level components of an
OpenStack Networking deployment. Before you deploy OpenStack Networking, it is useful to
understand the different components that make up the
solution, and how these components interact with each
other and with other OpenStack services.</para>
<section xml:id="arch_overview">
<title>Overview</title>
<para>OpenStack Networking is a standalone service, just
like other OpenStack services such as OpenStack
Compute, OpenStack Image service, OpenStack Identity
service, or the OpenStack Dashboard. Like those
services, a deployment of OpenStack Networking often
involves deploying several processes on a variety of
hosts.</para>
<para>The main process of the OpenStack Networking server is
<literal>neutron-server</literal>, which is a
Python daemon that exposes the OpenStack Networking API and passes
user requests to the configured OpenStack Networking plugin for
additional processing. Typically, the plugin requires
access to a database for persistent storage (also similar
to other OpenStack services).</para>
<para>If your deployment uses a controller host to run centralized
OpenStack Compute components, you can deploy the OpenStack Networking server on
that same host. However, OpenStack Networking is entirely
standalone and can be deployed on its own host as
well. OpenStack Networking also includes additional agents that
might be required, depending on your deployment: <itemizedlist>
<listitem>
<para><emphasis role="bold">plugin agent</emphasis>
(<literal>neutron-*-agent</literal>).
Runs on each hypervisor to perform local
vswitch configuration. The agent to be run will
depend on which plugin you are using, because
some plugins do not actually require an agent.
</para>
</listitem>
<listitem>
<para><emphasis role="bold">dhcp agent</emphasis>
(<literal>neutron-dhcp-agent</literal>).
Provides DHCP services to tenant networks.
This agent is the same for all plugins.
</para>
</listitem>
<listitem>
<para><emphasis role="bold">l3
agent</emphasis>
<literal>(neutron-l3-agent</literal>).
Provides L3/NAT forwarding to provide
external network access for VMs on tenant
networks. This agent is the same for all plugins.
</para>
</listitem>
</itemizedlist>
</para>
<para>These agents interact with the main Neutron process through RPC (for example,
RabbitMQ or Qpid) or through the standard OpenStack Networking API. Further:
<itemizedlist>
<listitem>
<para>OpenStack Networking relies on the OpenStack
Identity service (keystone) for the authentication and
authorization of all API requests.</para>
</listitem>
<listitem>
<para>OpenStack Compute (nova) interacts with OpenStack Networking through calls
to its standard API.  As part of creating a VM, the
<systemitem class="service">nova-compute</systemitem> service communicates with the OpenStack Networking API to plug
each virtual NIC on the VM into a particular network.</para>
</listitem>
<listitem><para>The OpenStack Dashboard (horizon) integrates with the OpenStack Networking
API, allowing administrators and tenant users to create and manage network services
through the Dashboard GUI.</para></listitem>
</itemizedlist>
</itemizedlist>
</para>
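<para>To verify that these agents are running and have registered with the
OpenStack Networking server, you can list them (assuming your plugin
supports the agent extension):</para>
<screen><prompt>$</prompt> <userinput>neutron agent-list</userinput></screen>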
</section>
<section xml:id="services">
<title>Place Services on Physical Hosts</title>
<para>Like other OpenStack services, OpenStack Networking provides cloud administrators with
significant flexibility in deciding which individual services should run on
which physical devices. At one extreme, all service daemons can be run on a
single physical host for evaluation purposes. At the other, each service could
have its own physical hosts and, in some cases, be replicated across multiple hosts for
redundancy. For more information, see the chapter on <link linkend="ch_high_avail">high availability</link>.</para>
<para>In this guide, we focus primarily on a standard
architecture that includes a “cloud controller” host,
a “network gateway” host, and a set of hypervisors for
running VMs.  The "cloud controller" and "network gateway" can be combined
in simple deployments. However, if you expect VMs to send significant amounts of
traffic to or from the Internet, a dedicated network gateway host is recommended
to avoid potential CPU contention between packet forwarding performed by
the <literal>neutron-l3-agent</literal> and other OpenStack services.</para>
</section>
<section xml:id="connectivity">
<title>Network Connectivity for Physical Hosts</title>
<mediaobject>
<imageobject>
<imagedata scale="60" fileref="../common/figures/Neutron-PhysNet-Diagram.png"/>
</imageobject>
</mediaobject>
<para>A standard OpenStack Networking setup has up to four distinct physical data center networks:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Management
network</emphasis>. Used for internal
communication between OpenStack components.
IP addresses on this network should be
reachable only within the data center. 
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Data
network</emphasis>. Used for VM data
communication within the cloud deployment. 
The IP addressing requirements of this network
depend on the OpenStack Networking plugin being used.  </para>
</listitem>
<listitem>
<para><emphasis role="bold">External
network</emphasis>. Used to provide VMs
with Internet access in some deployment
scenarios.  IP addresses on this network
should be reachable by anyone on the
Internet.  </para>
</listitem>
<listitem>
<para><emphasis role="bold">API
network</emphasis>. Exposes all OpenStack
APIs, including the OpenStack Networking API, to
tenants. IP addresses on this network
should be reachable by anyone on the
Internet. The API network may be the same as
the external network, because it is possible to create
an external-network subnet whose allocated
IP range uses less than the full
range of IP addresses in the IP block.</para>
</listitem>
</itemizedlist>
</section>
</section>
<?hard-pagebreak?>
<section xml:id="use_cases">
<title>OpenStack Networking Deployment Use Cases</title>
<para>
The following common-use cases for OpenStack Networking are
not exhaustive, but can be combined to create more complex use cases.
</para>
<section xml:id="use_cases_single_flat">
<title>Use Case: Single Flat Network</title>
<para>In the simplest use case, a single OpenStack Networking network is created. This is a
"shared" network, meaning it is visible to all tenants via the OpenStack Networking
API. Tenant VMs have a single NIC, and receive
a fixed IP address from the subnet(s) associated with that network.
This use case essentially maps to the FlatManager
and FlatDHCPManager models provided by OpenStack Compute. Floating IPs are not
supported.</para>
<para>This network type is often created by the OpenStack administrator
to map directly to an existing physical network in the data center (called a
"provider network"). This allows the provider to use a physical
router on that data center network as the gateway for VMs to reach
the outside world. For each subnet on an external network, the gateway
configuration on the physical router must be manually configured
outside of OpenStack.</para>
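<para>As an illustrative sketch, an administrator might create such a shared
provider network and its subnet as follows. The physical network name
<literal>physnet1</literal> and the addresses are assumptions for the
example, and the plugin must be configured to recognize the physical
network:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create public --shared --provider:network_type flat --provider:physical_network physnet1</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create public 203.0.113.0/24 --gateway 203.0.113.1</userinput></screen>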
<para>
<mediaobject>
<imageobject>
<imagedata scale="80" fileref="figures/UseCase-SingleFlat.png"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1Jb6iSoBo4G7fv7i2EMpYTMTxesLPmEPKIbI7sVbhhqY/edit -->
</para>
</section>
<?hard-pagebreak?>
<section xml:id="use_cases_multi_flat">
<title>Use Case: Multiple Flat Network</title>
<para>
This use case is similar to the above Single Flat Network use case,
except that tenants can see multiple shared networks via the OpenStack Networking API
and can choose which network (or networks) to plug into.
</para>
<para>
<mediaobject>
<imageobject>
<imagedata scale="60" fileref="figures/UseCase-MultiFlat.png"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/14ayGsyunW_P-wvY8OiueE407f7540JD3VsWUH18KHvU/edit -->
</para>
</section>
<?hard-pagebreak?>
<section xml:id="use_cases_mixed">
<title>Use Case: Mixed Flat and Private Network</title>
<para>
This use case is an extension of the above Flat Network use cases.
In addition to being able to see one or more shared networks via
the OpenStack Networking API, tenants can also have access to private per-tenant
networks (only visible to tenant users).
</para>
<para>
Created VMs can have NICs on any of the shared networks and/or any of the private networks
belonging to the tenant. This enables the creation of "multi-tier"
topologies using VMs with multiple NICs. It also supports a model where
a VM acting as a gateway can provide services such as routing, NAT, or
load balancing.
</para>
<para>
<mediaobject>
<imageobject>
<imagedata scale="55" fileref="figures/UseCase-MixedFlatPrivate.png"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1efSqR6KA2gv-OKl5Rl-oV_zwgYP8mgQHFP2DsBj5Fqo/edit -->
</para>
</section>
<?hard-pagebreak?>
<section xml:id="use_cases_single_router">
<title>Use Case: Provider Router with Private Networks</title>
<para>
This use case provides each tenant with one or more private networks, which
connect to the outside world via an OpenStack Networking router.
When each tenant gets exactly one network, this architecture maps to the same
logical topology as the VlanManager in OpenStack Compute (although of course, OpenStack Networking doesn't
require VLANs). Using the OpenStack Networking API, the tenant can only see a
network for each private network assigned to that tenant. The router
object in the API is created and owned by the cloud administrator.
</para>
<para>
This model supports giving VMs public addresses using
"floating IPs", in which the router maps public addresses from the
external network to fixed IPs on private networks. Hosts without floating
IPs can still create outbound connections to the external network, because
the provider router performs SNAT to the router's external IP. The
IP address of the physical router is used as the <literal>gateway_ip</literal> of the
external network subnet, so the provider has a default router for
Internet traffic.
</para>
<para>
The router provides L3 connectivity between private networks, meaning
that different tenants can reach each other's instances unless additional
filtering is used (for example, security groups). Because there is only a single
router, tenant networks cannot use overlapping IPs. Thus, it is likely
that the administrator would create the private networks on behalf of the tenants.
</para>
<para>
<mediaobject>
<imageobject>
<imagedata scale="55" fileref="figures/UseCase-SingleRouter.png"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1DKxeZZXml_fNZHRoGPKkC7sGdkPJZCtWytYZqHIp_ZE/edit -->
</para>
</section>
<?hard-pagebreak?>
<section xml:id="use_cases_tenant_router">
<title>Use Case: Per-tenant Routers with Private Networks</title>
<para>
This use case represents a more advanced router scenario in which each tenant gets
at least one router, and potentially has access to the OpenStack Networking API to
create additional routers. Tenants can create their own networks,
potentially uplinking those networks to a router. This model enables
tenant-defined, multi-tier applications, with
each tier being a separate network behind the router. Because there are
multiple routers, tenant subnets can overlap without conflicting,
given that access to external networks happens through SNAT or floating IPs.
Each router uplink and floating IP is allocated from the external network
subnet.
</para>
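<para>A minimal sketch of the tenant-side workflow, assuming an external
network already exists; the names and IDs are placeholders:</para>
<screen><prompt>$</prompt> <userinput>neutron router-create router1</userinput>
<prompt>$</prompt> <userinput>neutron router-gateway-set router1 &lt;ext-net-id&gt;</userinput>
<prompt>$</prompt> <userinput>neutron net-create net1</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router1 &lt;subnet-id&gt;</userinput></screen>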
<para>
<mediaobject>
<imageobject>
<imagedata scale="55"
fileref="figures/UseCase-MultiRouter.png" align="left"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1mmQc8cBUoTEfEns-ehIyQSTvOrjUdl5xeGDv9suVyAY/edit -->
</para>
</section>
</section>
</chapter>


@@ -1,50 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<preface xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_preface">
<title>Preface</title>
<para>OpenStack Networking was created to provide a rich
API for defining network connectivity and addressing in the cloud.
The service, code-named "Neutron" (formerly known as "Quantum"), gives
operators the ability to leverage different networking technologies to
power their cloud networking.</para>
<para>The Board of Directors and Technical Committee members involved in
networking-related development and documentation have decided to change
the code name to "Neutron", as part of a legal agreement with Quantum
Corporation (the owner of the "Quantum" trademark).</para>
<para>Any references to the previous code name have been removed in this guide wherever
possible; all configuration files have been changed in the Havana release and this guide
updated accordingly.</para>
<section xml:id="Intended_Audience-d1e85">
<title>Intended Audience</title>
<para>This guide assists OpenStack administrators in
leveraging different networking technologies to power
their cloud networking. This document covers how to
install, configure, and run OpenStack Networking.</para>
<para>You must have access to a plugin providing
the implementation of the OpenStack Networking service. Plugins are
distributed both within the OpenStack Networking distribution and externally.
For more information, see the <link linkend="flexibility">plugin listing</link>.
</para>
<para>You should also be familiar with running the OpenStack
Compute service as described in the operator's documentation.
</para>
</section>
<xi:include href="../common/section_dochistory.xml" />
<section xml:id="resources">
<title>Resources</title>
<para>For more information on OpenStack Networking and other network-related projects,
see the project page on the OpenStack wiki (<link
xlink:href="https://wiki.openstack.org/wiki/Neutron"
>wiki.openstack.org/Neutron</link>).</para>
<para>For information about programming against the OpenStack
Networking API, see the <link
xlink:href="http://docs.openstack.org/api/openstack-network/2.0/content/"
><citetitle>OpenStack Networking API Guide
(v2.0)</citetitle></link>.</para>
<para>We welcome feedback, comments, and bug reports at <link
xlink:href="https://bugs.launchpad.net/neutron">bugs.launchpad.net/Neutron</link>.</para>
</section>
<?hard-pagebreak?>
</preface>


@@ -1,591 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_using">
<title>Using OpenStack Networking</title>
<para>You can use OpenStack Networking in the following ways:
<itemizedlist>
<listitem><para>Expose the OpenStack Networking API to cloud tenants,
which enables them to build rich network topologies.
</para>
</listitem>
<listitem><para>Have the cloud administrator, or an automated
administrative tool, create network connectivity on behalf of tenants.
</para>
</listitem>
</itemizedlist>
</para>
<para>
Both tenants and cloud administrators can perform the following procedures.
</para>
<section xml:id="api_features">
<title>Core OpenStack Networking API features</title>
<para>After you install and run OpenStack Networking, tenants
and administrators can perform create-read-update-delete (CRUD) API
networking operations by using either the
<command>neutron</command> CLI tool or the API.
Like other OpenStack CLI tools, the <command>neutron</command>
tool is just a basic wrapper around the OpenStack Networking API. Any
operation that can be performed using the CLI has an equivalent API call
that can be performed programmatically.
</para>
<para>The CLI includes a number of options. For details, refer to the
<citetitle>OpenStack End User Guide</citetitle>.
</para>
<section xml:id="api_abstractions">
<title>API Abstractions</title>
<para>The OpenStack Networking v2.0 API provides control over both
L2 network topologies and the IP addresses used on those networks
(IP Address Management or IPAM). There is also an extension to
cover basic L3 forwarding and NAT, which provides capabilities
similar to <command>nova-network</command>.
</para>
<para>In the OpenStack Networking API:
<itemizedlist>
<listitem><para>A 'Network' is an isolated L2 network segment
(similar to a VLAN), which forms the basis for
describing the L2 network topology available in an OpenStack
Networking deployment.
</para></listitem>
<listitem><para>A 'Subnet' associates a block of IP addresses
and other network configuration (for example, default gateways
or dns-servers) with an OpenStack Networking network. Each
subnet represents an IPv4 or IPv6 address block and, if needed,
each OpenStack Networking network can have multiple subnets.
</para></listitem>
<listitem><para>A 'Port' represents an attachment port to a L2
OpenStack Networking network. When a port
is created on the network, by default it is allocated an
available fixed IP address out of one of the designated subnets
for each IP version (if one exists). When the port is destroyed,
its allocated addresses return to the pool of available IPs on
the subnet. Users of the OpenStack Networking API can either
choose a specific IP address from the block, or let OpenStack
Networking choose the first available IP address.
</para></listitem>
</itemizedlist>
</para>
<para>The following table summarizes the attributes available for each
of the previous networking abstractions. For more information about
API abstractions and operations, see the
<link xlink:href="http://docs.openstack.org/api/openstack-network/2.0/content/">Networking API v2.0 Reference</link>.
</para>
<table rules="all">
<caption>Network Attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><systemitem>admin_state_up</systemitem></td>
<td>bool</td>
<td>True</td>
<td>Administrative state of the network. If specified as
False (down), this network does not forward
packets.
</td>
</tr>
<tr>
<td><systemitem>id</systemitem></td>
<td>uuid-str</td>
<td>Generated</td>
<td>UUID for this network.</td>
</tr>
<tr>
<td><systemitem>name</systemitem></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this network; is not required
to be unique.
</td>
</tr>
<tr>
<td><systemitem>shared</systemitem></td>
<td>bool</td>
<td>False</td>
<td>Specifies whether this network resource can
be accessed by any tenant. The default policy setting restricts
usage of this attribute to administrative users only.
</td>
</tr>
<tr>
<td><systemitem>status</systemitem></td>
<td>string</td>
<td>N/A</td>
<td>Indicates whether this network is
currently operational.</td>
</tr>
<tr>
<td><systemitem>subnets</systemitem></td>
<td>list(uuid-str)</td>
<td>Empty list</td>
<td>List of subnets associated with this network.
</td>
</tr>
<tr>
<td><systemitem>tenant_id</systemitem></td>
<td>uuid-str</td>
<td>N/A</td>
<td>Tenant owner of the network. Only administrative users
can set the tenant identifier; this cannot be changed
using authorization policies.
</td>
</tr>
</tbody>
</table>
<table rules="all">
<caption>Subnet Attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><systemitem>allocation_pools</systemitem></td>
<td>list(dict)</td>
<td>Every address in <systemitem>cidr</systemitem>,
excluding <systemitem>gateway_ip</systemitem> (if
configured).
</td>
<td><para>List of cidr sub-ranges that are available for dynamic
allocation to ports. Syntax:
<programlisting>[ { "start":"10.0.0.2",
"end": "10.0.0.254"} ]</programlisting></para>
</td>
</tr>
<tr>
<td><systemitem>cidr</systemitem></td>
<td>string</td>
<td>N/A</td>
<td>IP range for this subnet, based on the IP version.</td>
</tr>
<tr>
<td><systemitem>dns_nameservers</systemitem></td>
<td>list(string)</td>
<td>Empty list</td>
<td>List of DNS name servers used by hosts in this subnet.</td>
</tr>
<tr>
<td><systemitem>enable_dhcp</systemitem></td>
<td>bool</td>
<td>True</td>
<td>Specifies whether DHCP is enabled for this subnet.</td>
</tr>
<tr>
<td><systemitem>gateway_ip</systemitem></td>
<td>string</td>
<td>First address in <systemitem>cidr</systemitem>
</td>
<td>Default gateway used by devices in this subnet.</td>
</tr>
<tr>
<td><systemitem>host_routes</systemitem></td>
<td>list(dict)</td>
<td>Empty list</td>
<td>Routes that should be used by devices with
IPs from this subnet (not including local
subnet route).</td>
</tr>
<tr>
<td><systemitem>id</systemitem></td>
<td>uuid-string</td>
<td>Generated</td>
<td>UUID representing this subnet.</td>
</tr>
<tr>
<td><systemitem>ip_version</systemitem></td>
<td>int</td>
<td>4</td>
<td>IP version.</td>
</tr>
<tr>
<td><systemitem>name</systemitem></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this subnet (might
not be unique).
</td>
</tr>
<tr>
<td><systemitem>network_id</systemitem></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Network with which this subnet is associated.</td>
</tr>
<tr>
<td><systemitem>tenant_id</systemitem></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Owner of network. Only administrative users
can set the tenant identifier; this cannot be changed
using authorization policies.
</td>
</tr>
</tbody>
</table>
<table rules="all">
<caption>Port Attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><systemitem>admin_state_up</systemitem></td>
<td>bool</td>
<td>true</td>
<td>Administrative state of this port. If specified as False
(down), this port does not forward packets.
</td>
</tr>
<tr>
<td><systemitem>device_id</systemitem></td>
<td>string</td>
<td>None</td>
<td>Identifies the device using this port (for example, a
virtual server's ID).
</td>
</tr>
<tr>
<td><systemitem>device_owner</systemitem></td>
<td>string</td>
<td>None</td>
<td>Identifies the entity using this port (for example, a
dhcp agent).</td>
</tr>
<tr>
<td><systemitem>fixed_ips</systemitem></td>
<td>list(dict)</td>
<td>Automatically allocated from pool</td>
<td>Specifies IP addresses for this port; associates
the port with the subnets containing the listed IP
addresses.
</td>
</tr>
<tr>
<td><systemitem>id</systemitem></td>
<td>uuid-string</td>
<td>Generated</td>
<td>UUID for this port.</td>
</tr>
<tr>
<td><systemitem>mac_address</systemitem></td>
<td>string</td>
<td>Generated</td>
<td>Mac address to use on this port.</td>
</tr>
<tr>
<td><systemitem>name</systemitem></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this port (might
not be unique).
</td>
</tr>
<tr>
<td><systemitem>network_id</systemitem></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Network with which this port is associated.
</td>
</tr>
<tr>
<td><systemitem>status</systemitem></td>
<td>string</td>
<td>N/A</td>
<td>Indicates whether the network is currently
operational.
</td>
</tr>
<tr>
<td><systemitem>tenant_id</systemitem></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Owner of the network. Only administrative users
can set the tenant identifier; this cannot be changed
using authorization policies.
</td>
</tr>
</tbody>
</table>
</section>
<section xml:id="basic_operations">
<title>Basic operations</title>
<para>Before going further, it is highly recommended that you first
read the few pages in the <link xlink:href="http://docs.openstack.org/user-guide/content/index.html">
OpenStack End User Guide</link> that are specific to OpenStack
Networking. OpenStack Networking's CLI has some advanced
capabilities that are described only in that guide.
</para>
<para>The following table provides just a few examples of the
<systemitem>neutron</systemitem> tool usage.
</para>
<table rules="all">
<caption>Basic OpenStack Networking operations</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
<tr>
<th>Action</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Create a network.</td>
<td><screen><prompt>$</prompt> <userinput>neutron net-create net1</userinput></screen></td>
</tr>
<tr>
<td>Create a subnet associated with net1.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>List ports on a tenant.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list</userinput></screen></td>
</tr>
<tr>
<td>List ports on a tenant, and display the <systemitem>id</systemitem>, <systemitem>fixed_ips</systemitem>, and
<systemitem>device_owner</systemitem> columns.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list -c id -c fixed_ips -c device_owner</userinput></screen>
</td>
</tr>
<tr>
<td>Display details of a particular port.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-show <replaceable>port-id</replaceable></userinput></screen></td>
</tr>
</tbody>
</table>
<note>
<para>
The <systemitem>device_owner</systemitem> field describes who owns the
port. A port whose <systemitem>device_owner</systemitem> begins with:
<itemizedlist>
<listitem><para>"network:" is created by OpenStack
Networking.</para></listitem>
<listitem><para>"compute:" is created by OpenStack Compute.
</para></listitem>
</itemizedlist>
</para>
</note>
</section>
<section xml:id="admin_api_config">
<title>Administrative operations</title>
<para>The cloud administrator can perform any <systemitem>neutron</systemitem>
call on behalf of tenants by specifying an OpenStack Identity <systemitem>tenant_id</systemitem> in the request, as follows:
</para>
<screen><prompt>$</prompt> <userinput>neutron net-create --tenant-id=<replaceable>tenant-id</replaceable> <replaceable>network-name</replaceable></userinput></screen>
<para>
For example:
</para>
<screen><prompt>$</prompt> <userinput>neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1</userinput></screen>
<note><para>To view all tenant IDs in OpenStack Identity, run the
following command as an OpenStack Identity (keystone) admin user:
</para>
<screen><prompt>$</prompt> <userinput>keystone tenant-list</userinput></screen>
</note>
</section>
<section xml:id="advanced_networking">
<title>Advanced operations</title>
<para>The following table provides a few advanced examples of using the
<systemitem>neutron</systemitem> tool to create and display
networks, subnets, and ports.</para>
<table rules="all">
<caption>Advanced OpenStack Networking operations</caption>
<col width="25%"/>
<col width="75%"/>
<thead>
<tr>
<th>Action</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Create a "shared" network (that is, a network that can be used by all tenants).</td>
<td><screen><prompt>$</prompt> <userinput>neutron net-create --shared public-net</userinput></screen></td>
</tr>
<tr>
<td>Create a subnet that has a specific gateway IP address.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create --gateway 10.0.0.254 net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>Create a subnet that has no gateway IP address.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create --no-gateway net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>Create a subnet in which DHCP is disabled.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24 --enable_dhcp False</userinput></screen></td>
</tr>
<tr>
<td>Create a subnet with a specific set of host routes.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create test-net1 40.0.0.0/24 --host_routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2</userinput></screen></td>
</tr>
<tr>
<td>Create a subnet with a specific set of DNS name servers.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create test-net1 40.0.0.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8</userinput></screen></td>
</tr>
<tr>
<td>Display all ports/IPs allocated on a network.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list --network_id <replaceable>net-id</replaceable></userinput></screen></td>
</tr>
</tbody>
</table>
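            <para>Several of these options can be combined in a single workflow. The
                following sketch creates a shared network and a subnet with a custom
                gateway and DNS name server; the names and addresses are illustrative
                only:</para>
            <screen><prompt>$</prompt> <userinput>neutron net-create --shared public-net</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create public-net 10.0.0.0/24 --gateway 10.0.0.254 --dns_nameservers list=true 8.8.8.8</userinput></screen>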
</section>
</section>
<section xml:id="using_nova_with_neutron">
<title>Using OpenStack Compute with OpenStack Networking</title>
<section xml:id="basic_workflow_with_nova">
<title>Basic Operations</title>
<table rules="all">
<caption>Basic Compute/Networking operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Action</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Check available networks.</td>
<td><screen><prompt>$</prompt> <userinput>neutron net-list</userinput></screen></td>
</tr>
<tr>
<td>Boot a VM with a single NIC on a selected OpenStack Networking network.</td>
<td><screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic net-id=<replaceable>net-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen>
</td>
</tr>
<tr>
<td>Search for all ports with a <systemitem>device_id</systemitem> corresponding to the OpenStack Compute instance UUID.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list --device_id=<replaceable>vm-id</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Search for ports, but limit display to only the port's <systemitem>mac_address</systemitem>.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list -c mac_address --device_id=<replaceable>vm-id</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Temporarily disable a port from sending traffic.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-update <replaceable>port-id</replaceable> --admin_state_up=False</userinput></screen></td>
</tr>
<tr>
<td>Delete a VM.</td>
<td><screen><prompt>$</prompt> <userinput>nova delete <replaceable>vm-id</replaceable></userinput></screen></td>
</tr>
</tbody>
</table>
<note><para>When you:
<itemizedlist>
<listitem><para>Boot a Compute VM, a port that corresponds to the
    VM NIC is automatically created on the network. You may
    also need to configure <link linkend="enabling_ping_and_ssh">security group rules</link> to allow access to the VM.</para></listitem>
<listitem><para>Delete a Compute VM, the underlying OpenStack
Networking port is automatically deleted as well.</para></listitem>
</itemizedlist>
</para></note>
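        <para>For example, a typical boot-and-inspect sequence might look like the
            following; the image and flavor names are illustrative and depend on your
            deployment:</para>
        <screen><prompt>$</prompt> <userinput>neutron net-list</userinput>
<prompt>$</prompt> <userinput>nova boot --image cirros --flavor m1.tiny --nic net-id=<replaceable>net-id</replaceable> test-vm</userinput>
<prompt>$</prompt> <userinput>neutron port-list --device_id=<replaceable>vm-id</replaceable></userinput></screen>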
</section>
<section xml:id="advanceed_vm_creation">
<title>Advanced VM creation</title>
<table rules="all">
<caption>VM creation operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Action</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Boot a VM with multiple NICs.</td>
<td><screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic net-id=<replaceable>net1-id</replaceable> --nic net-id=<replaceable>net2-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Boot a VM with a specific IP address: first create an OpenStack
Networking port with a specific IP address, then boot
a VM specifying a <systemitem>port-id</systemitem> rather than a
<systemitem>net-id</systemitem>.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-create --fixed-ip subnet_id=<replaceable>subnet-id</replaceable>,ip_address=<replaceable>IP</replaceable> <replaceable>net-id</replaceable></userinput>
<prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic port-id=<replaceable>port-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Boot a VM that connects to all networks that are accessible to
the tenant who submits the request (without the
<systemitem>--nic</systemitem> option).
</td>
<td><screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> <replaceable>vm-name</replaceable></userinput></screen></td>
</tr>
</tbody>
</table>
<note><para>OpenStack Networking does not currently support the <command>v4-fixed-ip</command> parameter of the <command>--nic</command> option for the <command>nova</command> command.
</para></note>
</section>
<section xml:id="enabling_ping_and_ssh">
<title>Security Groups (Enabling Ping and SSH on VMs)</title>
<para>You must configure security group rules depending on the type of
plugin you are using. If you are using a plugin that:
</para>
<itemizedlist>
<listitem><para>Implements OpenStack Networking security groups, you can
configure security group rules directly by using
<command>neutron security-group-rule-create</command>. The following example
allows <command>ping</command> and <command>ssh</command> access to
your VMs.
</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-rule-create --protocol icmp --direction ingress default</userinput>
<prompt>$</prompt> <userinput>neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default</userinput></screen>
</listitem>
<listitem>
<para>Does not implement OpenStack Networking security groups, you can
configure security group rules by using the
<command>nova secgroup-add-rule</command> or
<command>euca-authorize</command> command. The following
<systemitem>nova</systemitem> commands allow
<command>ping</command> and <command>ssh</command> access to your VMs.
</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0</userinput>
<prompt>$</prompt> <userinput>nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</userinput></screen>
</listitem>
</itemizedlist>
<note>
<para>If your plugin implements OpenStack Networking security groups,
you can also leverage Compute security groups by setting
<systemitem>security_group_api = neutron</systemitem> in
<filename>nova.conf</filename>. After setting this option, all Compute
security group commands are proxied to OpenStack Networking.
</para>
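            <para>For example, a minimal <filename>nova.conf</filename> excerpt for this
                setup might look like the following sketch (assuming the option lives in
                the <literal>[DEFAULT]</literal> section, as it did in this
                release):</para>
            <programlisting language="ini">[DEFAULT]
security_group_api = neutron</programlisting>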
</note>
</section>
</section>
</chapter>

View File

@ -1,193 +1,25 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_adv_operational_features">
<title>Advanced Operational Features</title>
<section xml:id="ch_adv_logging">
<title>Logging Configuration</title>
<para>OpenStack Networking components use the Python logging module for logging. You can
configure logging using any of the following:</para>
<para>
<itemizedlist>
<listitem>
<para>Update settings in the <filename>/etc/neutron/neutron.conf</filename> file. For
example:</para>
<programlisting language="ini">[DEFAULT]
# Default log level is INFO
# verbose and debug have the same result.
# Setting either one enables DEBUG log level output
debug = False
verbose = True
# Where to store Neutron state files. This directory must be writable by the
# user executing the agent.
# state_path = /var/lib/neutron
# Where to store lock files
lock_path = $state_path/lock
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S
# use_syslog -> syslog
# log_file and log_dir -> log_dir/log_file
# (not log_file) and log_dir -> log_dir/{binary_name}.log
# use_stderr -> stderr
# (not use_stderr) and (not log_file) -> stdout
# publish_errors -> notification system
# use_syslog = False
# syslog_log_facility = LOG_USER
# use_stderr = True
# log_file =
log_dir = /var/log/neutron</programlisting>
</listitem>
<listitem>
<para>Use command-line options. For example, use the <systemitem>--debug</systemitem>
option when using the <systemitem>neutron</systemitem> command-line tool (see the
<citetitle>OpenStack End User Guide</citetitle> for more information).</para>
<para>Command-line options override options specified in
          <filename>neutron.conf</filename>; see the example after this list.</para>
</listitem>
<listitem>
<para>Use a python logging configuration file (see <link
xlink:href="http://docs.python.org/howto/logging.html">Python Logging HOWTO</link> for
more information).</para>
</listitem>
</itemizedlist>
</para>
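    <para>For example, the following runs a single <systemitem>neutron</systemitem>
      call with debug logging enabled, overriding whatever is set in
      <filename>neutron.conf</filename> (the <systemitem>net-list</systemitem> call is
      just an illustration):</para>
    <screen><prompt>$</prompt> <userinput>neutron --debug net-list</userinput></screen>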
</section>
<section xml:id="ch_adv_notification">
<title>Notifications</title>
<section xml:id="ch_adv_notification_overview">
<title>Notification Options</title>
<para>You can send notifications when creating, updating, or deleting OpenStack Networking
resources. To support a DHCP agent, you must set up an <literal>rpc_notifier</literal>
driver.</para>
<para>To configure notifications, update settings in the
<filename>/etc/neutron/neutron.conf</filename> file. For example:</para>
<programlisting language="ini">============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are create, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
# notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic names or to set logging level
# default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications</programlisting>
</section>
<section xml:id="ch_adv_notification_cases">
<title>Notification Use Cases</title>
<section xml:id="ch_adv_notification_cases_log_rpc">
<title>Logging and RPC</title>
<para>To make the OpenStack Networking server send notifications using logging and RPC, use
the following configuration in the <filename>neutron.conf</filename> file.</para>
<para>RPC notifications are sent to the <systemitem>notifications.info</systemitem> queue
        bound to a topic exchange defined by <systemitem>control_exchange</systemitem> (also
defined in <systemitem>neutron.conf</systemitem>). Logging options are described in <link
linkend="ch_adv_logging">Logging Settings</link>.</para>
<programlisting language="ini"># ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are created, updated, or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic names or to set logging level
default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications</programlisting>
</section>
<section xml:id="ch_adv_notification_cases_multi_rpc_topics">
<title>Multiple RPC Topics</title>
<para>To make the OpenStack Networking server send notifications to multiple RPC topics, use
the following configuration in the <filename>neutron.conf</filename> file.</para>
<para>RPC notifications are sent to the <systemitem>notifications_one.info</systemitem> and
          <systemitem>notifications_two.info</systemitem> queues, which are bound to a topic
exchange defined by <systemitem>control_exchange</systemitem> (also defined in
<systemitem>neutron.conf</systemitem>). Logging options are described in <link
linkend="ch_adv_logging">Logging Settings</link>.</para>
<programlisting language="ini"># ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are created, updated, or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
# notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rabbit_notifier
# default_notification_level is used to form actual topic names or to set logging level
default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications_one,notifications_two</programlisting>
</section>
</section>
</section>
<section xml:id="ch_adv_quotas">
<section xml:id="section_networking-advanced-quotas"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Quotas</title>
<para>Quota is a function to limit number of resources. You can enforce default quota for all
tenants. You will get error when you try to create more resources than the limit.</para>
<screen><prompt>$</prompt> <userinput>neutron net-create test_net
Quota exceeded for resources: ['network']</userinput></screen>
<para>
Per-tenant quota configuration is also supported by quota
extension API. See <link linkend="cfg_quotas_per_tenant">
Per-tenant quota configuration</link> for details.
</para>
<para>Quota is a function used to limit the number of resources. A default quota may be
enforced for all tenants. Attempting to create resources over the limit triggers an
error.</para>
<screen><prompt>$</prompt> <userinput>neutron net-create test_net</userinput>
<computeroutput>Quota exceeded for resources: ['network']</computeroutput></screen>
<para>Per-tenant quota configuration is also supported by the quota extension API. See <link
linkend="cfg_quotas_per_tenant"> Per-tenant quota configuration</link> for details. </para>
<section xml:id="cfg_quotas_common">
<title>Basic quota configuration</title>
<para>In OpenStack Networking default quota mechanism, all
tenants have a same quota value, i.e., a number of resources
that a tenant can create. This is enabled by default.</para>
<para>The value of quota is defined in the OpenStack Networking configuration file
(<filename>neutron.conf</filename>). If you want to disable quotas for a specific resource
(e.g., network, subnet, port), remove a corresponding item from
<literal>quota_items</literal>. Each of the quota values in the example below is the
default value.</para>
<para>In the Networking default quota mechanism, all tenants have the same quota value, such
as the number of resources that a tenant can create. This is enabled by default.</para>
<para>The quota value is defined in the OpenStack Networking configuration file
(<filename>neutron.conf</filename>). If you want to disable quotas for a specific resource
(e.g., network, subnet, port), remove a corresponding item from
<literal>quota_items</literal>. Each of the quota values in the example below is the default
value.</para>
<programlisting language="ini">[quotas]
# resource name(s) that are supported in quota features
quota_items = network,subnet,port
@ -393,5 +225,4 @@ quota_security_group_rule = 100</programlisting>
| subnet | 10 |
+------------+-------+</computeroutput></screen>
</section>
</section>
</chapter>
</section>

View File

@ -8,5 +8,12 @@
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook">
<title>OpenStack Networking</title>
<xi:include href="networking/section_networking-options-reference.xml"/>
<para>This chapter explains the configuration options and scenarios for OpenStack Networking.
    For installation prerequisites, steps, and use cases, refer to the corresponding chapter in the
<emphasis role="italic">OpenStack Installation Guide</emphasis>.</para>
<xi:include href="networking/section_networking-options-reference.xml"/>
<xi:include href="networking/section_networking-config-identity.xml"/>
<xi:include href="networking/section_networking-scenarios.xml"/>
<xi:include href="networking/section_networking-adv-config.xml"/>
<xi:include href="networking/section_networking-multi-dhcp-agents.xml"/>
</chapter>

View File

@ -1,11 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_adv_config">
<section xml:id="section_networking-advanced-config"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Advanced Configuration Options</title>
  <para>This section describes advanced configuration options for various system components (that
    is, options whose defaults are usually acceptable, but that you might want to tweak). After
    installing from packages, $NEUTRON_CONF_DIR is <filename>/etc/neutron</filename>.</para>
<section xml:id="neutron_server">
<section xml:id="section_neutron_server">
<title>OpenStack Networking Server with Plugin</title>
    <para>This is the web server that runs the OpenStack Networking API. It is
      responsible for loading a plugin and passing API calls to the plugin for processing.
@ -103,7 +105,7 @@ mysql&gt; grant all on &lt;database-name&gt;.* to '&lt;user-name&gt;'@'%';</comp
    <para>All of the plugin configuration file options can be found in the Appendix -
Configuration File Options.</para>
</section>
<section xml:id="adv_cfg_dhcp_agent">
<section xml:id="section_adv_cfg_dhcp_agent">
<title>DHCP Agent</title>
    <para>You can run a DHCP server that allocates IP addresses to virtual
      machines running on the network. When a subnet is created, by default, the
@ -143,13 +145,11 @@ mysql&gt; grant all on &lt;database-name&gt;.* to '&lt;user-name&gt;'@'%';</comp
</tr>
</tbody>
</table></para>
<para>All of the DHCP agent configuration options can be found in the <link
linkend="dhcp_agent_ini"> Appendix - Configuration File Options</link>.</para>
<section xml:id="adv_cfg_dhcp_agent_namespace">
<title>Namespace</title>
      <para>By default, the DHCP agent uses Linux network namespaces to
        support overlapping IP addresses. Requirements for network namespace support are
described in the <link linkend="ch_limitations">Limitation</link> section.</para>
described in the <link linkend="section_limitations">Limitations</link> section.</para>
<para>
<emphasis role="bold">If the Linux installation does not support network namespace,
you must disable using network namespace in the DHCP agent config
@ -157,7 +157,7 @@ mysql&gt; grant all on &lt;database-name&gt;.* to '&lt;user-name&gt;'@'%';</comp
<screen><computeroutput>use_namespaces = False</computeroutput></screen>
</section>
</section>
<section xml:id="adv_cfg_l3_agent">
<section xml:id="section_adv_cfg_l3_agent">
<title>L3 Agent</title>
    <para>You can run an L3 agent, which enables layer 3 forwarding and
      floating IP support. The node that runs the L3 agent should run:</para>
@ -219,13 +219,11 @@ admin_password $SERVICE_PASSWORD</computeroutput></screen>
</listitem>
</orderedlist>
</para>
<para>All of the L3 agent configuration options can be found in the <link
linkend="l3_agent"> Appendix - Configuration File Options</link>.</para>
<section xml:id="adv_cfg_l3_agent_namespace">
<title>Namespace</title>
      <para>By default, the L3 agent uses Linux network namespaces to support
        overlapping IP addresses. Requirements for network namespace support are described
in the <link linkend="ch_limitations">Limitation</link> section.</para>
in the <link linkend="section_limitations">Limitation</link> section.</para>
<para>
<emphasis role="bold">If the Linux installation does not support network namespace,
you must disable using network namespace in the L3 agent config file</emphasis>
@ -290,4 +288,93 @@ gateway_external_network_id = e828e54c-850a-4e74-80a8-8b79c6a285d8
external_network_bridge = br-ex-2</computeroutput></screen>
</section>
</section>
</chapter>
<section xml:id="section_limitations">
<title>Limitations</title>
<para>
<itemizedlist>
<listitem>
          <para><emphasis>No equivalent for nova-network
            --multi_host flag:</emphasis> Nova-network has
            a model where the L3, NAT, and DHCP processing
            happens on the compute node itself, rather than on a
            dedicated networking node. OpenStack Networking
            now supports running multiple l3-agents and dhcp-agents,
            with load split across those agents, but the
            tight coupling of that scheduling with the location of
            the VM is not supported in Grizzly. The Havana release is expected
            to include an exact replacement for the --multi_host flag
            in nova-network.</para>
</listitem>
<listitem>
<para><emphasis>Linux network namespace required on nodes running <systemitem class="
service">neutron-l3-agent</systemitem>
or <systemitem class="
service">neutron-dhcp-agent</systemitem> if overlapping IPs are in use: </emphasis>. In order
to support overlapping IP addresses, the OpenStack Networking DHCP and L3 agents
use Linux network namespaces by default. The hosts running these processes must
support network namespaces. To support network namespaces, the following are
required:</para>
<itemizedlist>
<listitem>
<para>Linux kernel 2.6.24 or newer (with CONFIG_NET_NS=y in kernel
configuration) and</para>
</listitem>
<listitem>
<para>iproute2 utilities ('ip' command) version 3.1.0 (aka 20111117) or
newer</para>
</listitem>
</itemizedlist>
          <para>To check whether your host supports namespaces, try running the following
            commands as root:</para>
<screen><prompt>#</prompt> <userinput>ip netns add test-ns</userinput>
<prompt>#</prompt> <userinput>ip netns exec test-ns ifconfig</userinput></screen>
          <para>If the preceding commands do not produce errors, your platform is likely
            sufficient to use the dhcp-agent or l3-agent with namespaces. In our experience,
            Ubuntu 12.04 or later and Fedora 17 or newer support namespaces, but some
            older RHEL platforms do not by default. It might be possible to upgrade the
            iproute2 package on a platform that does not support namespaces by default.</para>
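          <para>One way to verify the kernel and iproute2 versions is shown below. This is
            a minimal sketch; the <filename>/boot/config-*</filename> location is an
            assumption that holds on most Ubuntu and Fedora systems, and exact paths and
            package names vary by distribution:</para>
          <screen><prompt>#</prompt> <userinput>uname -r</userinput>
<prompt>#</prompt> <userinput>grep CONFIG_NET_NS /boot/config-$(uname -r)</userinput>
<prompt>#</prompt> <userinput>ip -V</userinput></screen>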
<para>If you need to disable namespaces, make sure the
<filename>neutron.conf</filename> used by neutron-server has the following
setting:</para>
<programlisting>allow_overlapping_ips=False</programlisting>
          <para>and that the <filename>dhcp_agent.ini</filename> and <filename>l3_agent.ini</filename> files have the following
setting:</para>
<programlisting>use_namespaces=False</programlisting>
<note><para>If the host does not support namespaces then the <systemitem class="service"
>neutron-l3-agent</systemitem> and <systemitem class="service"
            >neutron-dhcp-agent</systemitem> should be run on different hosts, because
            there is no isolation between the IP addresses
            created by the L3 agent and by the DHCP agent. By manipulating the routing,
            the user can ensure that these networks have access to one another.</para></note>
<para>If you run both L3 and DHCP services on the same node, you should enable
namespaces to avoid conflicts with routes:</para>
<programlisting>use_namespaces=True</programlisting>
</listitem>
</itemizedlist>
<itemizedlist><listitem>
<para><emphasis>No IPv6 support for L3 agent:</emphasis> The <systemitem class="
service">neutron-l3-agent</systemitem>, used
by many plugins to implement L3 forwarding, supports only IPv4 forwarding.
Currently, there are no errors provided if you configure IPv6 addresses via the
API.</para>
</listitem>
<listitem>
<para><emphasis>ZeroMQ support is experimental</emphasis>: Some agents, including
<systemitem class="service"
>neutron-dhcp-agent</systemitem>, <systemitem class="service"
>neutron-openvswitch-agent</systemitem>, and <systemitem class="service"
>neutron-linuxbridge-agent</systemitem> use
RPC to communicate. ZeroMQ is an available option in the configuration file, but
has not been tested and should be considered experimental. In particular, there
are believed to be issues with ZeroMQ and the dhcp agent.</para>
</listitem><listitem>
<para><emphasis>MetaPlugin is experimental</emphasis>: This release includes a
"MetaPlugin" that is intended to support multiple plugins at the same time for
different API requests, based on the content of those API requests. This
functionality has not been widely reviewed or tested by the core team, and
should be considered experimental until further validation is performed.</para>
</listitem>
</itemizedlist>
</para>
</section>
</section>

View File

@ -1,11 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_config">
<title>Required Configuration for OpenStack Identity &amp; Compute</title>
<para>To work with OpenStack Networking, you must configure and set up the OpenStack Identity
Service and the OpenStack Compute Service.</para>
<section xml:id="keystone">
<section xml:id="section_networking-config-identity"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>OpenStack Identity</title>
<procedure>
<title>To configure the OpenStack Identity Service for use with OpenStack
@ -109,8 +106,8 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
<para>See the OpenStack Installation Guides for more details
about creating service entries and service users.
</para>
</section>
<section xml:id="nova_with_neutron">
<section xml:id="nova_with_neutron">
<title>OpenStack Compute</title>
<para>If OpenStack Networking is used, you must not run OpenStack
Compute's <systemitem class="service">nova-network</systemitem>
@ -158,7 +155,8 @@ catalog.$Region.network.internalURL = http://10.211.55.17:9696
you must adjust settings in the <filename>nova.conf</filename>
configuration file.
</para>
<section xml:id="nova_with_neutron_api">
</section>
<section xml:id="nova_with_neutron_api">
<title>Networking API and Credential Configuration</title>
<para>Each time a VM is provisioned or deprovisioned in
OpenStack Compute, <systemitem class="service">nova-*</systemitem>
@ -436,4 +434,3 @@ libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
</computeroutput> </screen>
</section>
</section>
</chapter>

View File

@ -56,7 +56,7 @@ format="PNG" />
<informalfigure>
<mediaobject>
<imageobject>
<imagedata fileref="figures/demo_multiple_dhcp_agents.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/demo_multiple_dhcp_agents.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
</informalfigure>

View File

@ -1,8 +1,8 @@
<?xml version= "1.0" encoding= "UTF-8"?>
<section xml:id= "section_networking-options-reference"
xmlns= "http://docbook.org/ns/docbook"
xmlns:xi= "http://www.w3.org/2001/XInclude"
xmlns:xlink= "http://www.w3.org/1999/xlink" version= "5.0">
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="section_networking-options-reference"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Networking Configuration Options</title>
<para>These options and descriptions were generated from the code in
the Networking service project which provides software defined networking

View File

@ -8,20 +8,15 @@
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook"
version="5.0">
<title>Networking Plugins</title>
<para>
OpenStack Networking introduces the concept of a plugin, which is a
back-end implementation of the OpenStack Networking API. A plugin
can use a variety of technologies to implement the logical API
requests. Some OpenStack Networking plugins might use basic Linux
VLANs and IP tables, while others might use more advanced technologies,
such as L2-in-L3 tunneling or OpenFlow.
The follow sections detail the configuration options for the various
plugins available.
</para>
<title>Networking plugins</title>
<para>OpenStack Networking introduces the concept of a plugin, which is a back-end implementation of
the OpenStack Networking API. A plugin can use a variety of technologies to
implement the logical API requests. Some OpenStack Networking plugins might use
basic Linux VLANs and IP tables, while others might use more advanced
technologies, such as L2-in-L3 tunneling or OpenFlow. The following sections
detail the configuration options for the various plugins available. </para>
<section xml:id="networking-plugin-bigswitch">
<title>BigSwitch Configuration Options</title>
<title>BigSwitch configuration options</title>
<xi:include href="../../common/tables/neutron-bigswitch.xml"/>
</section>
<section xml:id="networking-plugin-brocade">
@ -33,11 +28,11 @@ plugins available.
<xi:include href="../../common/tables/neutron-cisco.xml"/>
</section>
<section xml:id="networking-plugin-hyperv">
<title>CloudBase Hyper-V Configuration Options</title>
<title>CloudBase Hyper-V configuration options</title>
<xi:include href="../../common/tables/neutron-hyperv.xml"/>
</section>
<section xml:id="networking-plugin-linuxbridge">
<title>Linux Bridge Configuration Options</title>
<title>Linux bridge configuration options</title>
<xi:include href="../../common/tables/neutron-linuxbridge.xml"/>
</section>
<section xml:id="networking-plugin-mlnx">
@ -45,7 +40,7 @@ plugins available.
<xi:include href="../../common/tables/neutron-mlnx.xml"/>
</section>
<section xml:id="networking-plugin-meta">
<title>Meta Plugin Configuration Options</title>
<title>Meta Plugin configuration options</title>
<para>The Meta Plugin allows you to use multiple plugins at the same time.</para>
<xi:include href="../../common/tables/neutron-meta.xml"/>
</section>
@ -54,27 +49,27 @@ plugins available.
<xi:include href="../../common/tables/neutron-ml2.xml"/>
</section>
<section xml:id="networking-plugin-midonet">
<title>MidoNet Configuration Options</title>
<title>MidoNet configuration options</title>
<xi:include href="../../common/tables/neutron-midonet.xml"/>
</section>
<section xml:id="networking-plugin-nec">
<title>NEC Configuration Options</title>
<title>NEC configuration options</title>
<xi:include href="../../common/tables/neutron-nec.xml"/>
</section>
<section xml:id="networking-plugin-nicira">
<title>Nicira NVP Configuration Options</title>
<title>Nicira NVP configuration options</title>
<xi:include href="../../common/tables/neutron-nicira.xml"/>
</section>
<section xml:id="networking-plugin-openvswitch">
<title>Open vSwitch Configuration Options</title>
<title>Open vSwitch configuration options</title>
<xi:include href="../../common/tables/neutron-openvswitch.xml"/>
</section>
<section xml:id="networking-plugin-plumgrid">
<title>PLUMgrid Configuration Options</title>
<title>PLUMgrid configuration options</title>
<xi:include href="../../common/tables/neutron-plumgrid.xml"/>
</section>
<section xml:id="networking-plugin-ryu">
<title>Ryu Configuration Options</title>
<title>Ryu configuration options</title>
<xi:include href="../../common/tables/neutron-ryu.xml"/>
</section>

View File

@ -1,11 +1,15 @@
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_under_the_hood">
<title>Under the Hood</title>
<para>This chapter describes two networking scenarios and how the Open vSwitch plugin and
the Linux bridging plugin implement these scenarios.</para>
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="section_networking-scenarios"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Networking scenarios</title>
<para>This chapter describes two networking scenarios and how the Open vSwitch plug-in and the
Linux bridging plug-in implement these scenarios.</para>
<section xml:id="under_the_hood_openvswitch">
<?dbhtml stop-chunking?>
<title>Open vSwitch</title>
<para>This section describes how the Open vSwitch plugin implements the OpenStack
<para>This section describes how the Open vSwitch plug-in implements the OpenStack
Networking abstractions.</para>
<section xml:id="under_the_hood_openvswitch_configuration">
<title>Configuration</title>
@ -31,7 +35,7 @@ bridge_mappings = physnet2:br-eth1</programlisting></para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-1.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Under the <literal>service</literal> tenant, create the shared router, define the
@ -72,7 +76,7 @@ bridge_mappings = physnet2:br-eth1</programlisting></para>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-1-ovs-compute.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-ovs-compute.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
@ -166,7 +170,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
The following figure shows the network devices on the network host:</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-1-ovs-network.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-ovs-network.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>As on the compute host, there is an Open vSwitch integration bridge
@ -272,7 +276,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
</itemizedlist></para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-1-ovs-netns.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-ovs-netns.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
@ -288,7 +292,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-2.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Under the <literal>service</literal> tenant, define the public
@ -330,7 +334,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-2-ovs-compute.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-ovs-compute.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<note><para>The Compute host configuration resembles the
@ -345,7 +349,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
scenario.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-2-ovs-network.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-ovs-network.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>In this configuration, the network namespaces are
@ -354,7 +358,7 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-2-ovs-netns.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-ovs-netns.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>In this scenario, there are four network namespaces
@ -369,11 +373,9 @@ bridge_mappings = physnet1:br-ex,physnet2:br-eth1</programlisting>
</section>
<section xml:id="under_the_hood_linuxbridge">
<title>Linux bridge</title>
<para>This section describes how the Linux bridge plugin
implements the OpenStack Networking abstractions. For
information about DHCP and L3 agents, see
<xref linkend="under_the_hood_openvswitch_scenario1" />.
</para>
<para>This section describes how the Linux bridge plug-in implements the OpenStack
Networking abstractions. For information about DHCP and L3 agents, see <xref
linkend="under_the_hood_openvswitch_scenario1"/>. </para>
<section xml:id="under_the_hood_linuxbridge_configuration">
<title>Configuration</title>
<para>This example uses VLAN isolation on the switches to isolate tenant networks. This configuration labels the physical
@ -398,7 +400,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-1.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Under the <literal>service</literal> tenant, create the shared router, define the
@ -438,7 +440,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-1-linuxbridge-compute.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-linuxbridge-compute.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
@ -476,14 +478,14 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
<para>The following figure shows the network devices on the network host.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-1-linuxbridge-network.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-linuxbridge-network.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>The following figure shows how the Linux bridge plugin uses network namespaces to
provide isolation.</para><note><para>veth pairs form connections between the
<para>The following figure shows how the Linux bridge plug-in uses network namespaces to
provide isolation.</para><note><para>veth pairs form connections between the
Linux bridges and the network namespaces.</para></note><mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-1-linuxbridge-netns.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-1-linuxbridge-netns.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
</section>
@ -495,7 +497,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
Internet.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-2.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Under the <literal>service</literal> tenant, define the public
@ -538,7 +540,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-2-linuxbridge-compute.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-linuxbridge-compute.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<note><para>The configuration on the compute host is very similar to the configuration in scenario 1. The
@ -551,7 +553,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
scenario.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-2-linuxbridge-network.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-linuxbridge-network.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>The main difference between the configuration in this scenario and the previous one
@ -559,7 +561,7 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
across the two subnets, as shown in the following figure.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/under-the-hood-scenario-2-linuxbridge-netns.png" contentwidth="6in"/>
<imagedata fileref="../../common/figures/under-the-hood-scenario-2-linuxbridge-netns.png" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>In this scenario, there are four network namespaces
@ -572,4 +574,4 @@ physical_interface_mappings = physnet2:eth1</programlisting></para>
</section>
</section>
</section>
</chapter>
</section>

View File

@ -1,13 +1,11 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_install">
<title>OpenStack Networking Installation</title>
<para>Learn how to install and get the OpenStack Networking service
up and running.</para>
<section xml:id="install_prereqs">
<title>Initial Prerequisites</title>
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_installing-openstack-networking">
<title>Installing OpenStack Networking Service</title>
<section xml:id="section_install_prereqs">
<title>Initial prerequisites</title>
<para>
<itemizedlist>
<listitem>
@ -24,8 +22,8 @@
</itemizedlist>
</para>
</section>
<section xml:id="install_ubuntu">
<title>Install Packages (Ubuntu)</title>
<section xml:id="section_install_ubuntu">
<title>Install packages (Ubuntu)</title>
<note>
<para>This procedure uses the Cloud Archive for Ubuntu. You can
read more about it at <link
@ -788,7 +786,7 @@ enabled = True</programlisting>
<systemitem>/etc/neutron</systemitem> directory.</para>
</section>
</section>
<section xml:id="install_fedora">
<section xml:id="section_install_fedora">
<title>Installing Packages (Fedora)</title>
<para>You can retrieve the OpenStack packages for Fedora from:
<link
@ -1009,4 +1007,150 @@ enabled = True</programlisting>
<systemitem>/etc/neutron</systemitem> directory.</para>
</section>
</section>
</chapter>
<section xml:id="section_networking-demo-setup">
<title>Set up for deployment use cases</title>
<para>This section describes how to configure the OpenStack
Networking service and its components for some typical use
cases.</para>
<xi:include href="section_networking-single-flat.xml"/>
<xi:include href="section_networking-provider-router-with-private_networks.xml"/>
<xi:include href="section_networking-pertenant-routers-with-private-networks.xml"/>
</section>
<section xml:id="section_networking-use-cases">
<title>OpenStack Networking Deployment Use Cases</title>
<para>
    The following common use cases for OpenStack Networking are
    not exhaustive, but they can be combined to create more complex ones.
</para>
<section xml:id="section_use-cases-single-flat">
<title>Use Case: Single Flat Network</title>
<para>In the simplest use case, a single OpenStack Networking network is created. This is a
"shared" network, meaning it is visible to all tenants via the OpenStack Networking
API. Tenant VMs have a single NIC, and receive
a fixed IP address from the subnet(s) associated with that network.
This use case essentially maps to the FlatManager
and FlatDHCPManager models provided by OpenStack Compute. Floating IPs are not
supported.</para>
<para>This network type is often created by the OpenStack administrator
to map directly to an existing physical network in the data center (called a
"provider network"). This allows the provider to use a physical
router on that data center network as the gateway for VMs to reach
the outside world. For each subnet on an external network, the gateway
configuration on the physical router must be manually configured
outside of OpenStack.</para>
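   <para>For example, an administrator might create such a shared provider network as
     follows. This is a sketch that assumes a flat provider network whose physical
     network label (<literal>physnet1</literal>) is already configured in the plugin;
     the subnet range is illustrative:</para>
   <screen><prompt>$</prompt> <userinput>neutron net-create public-net --shared --provider:network_type flat --provider:physical_network physnet1</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create public-net 203.0.113.0/24</userinput></screen>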
<para>
<mediaobject>
<imageobject>
<imagedata scale="80" fileref="../common/figures/UseCase-SingleFlat.png"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1Jb6iSoBo4G7fv7i2EMpYTMTxesLPmEPKIbI7sVbhhqY/edit -->
</para>
</section>
<?hard-pagebreak?>
<section xml:id="section_use-cases-multi-flat">
<title>Use Case: Multiple Flat Network</title>
<para>
This use case is similar to the above Single Flat Network use case,
except that tenants can see multiple shared networks via the OpenStack Networking API
and can choose which network (or networks) to plug into.
</para>
<para>
<mediaobject>
<imageobject>
<imagedata scale="60" fileref="../common/figures/UseCase-MultiFlat.png"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/14ayGsyunW_P-wvY8OiueE407f7540JD3VsWUH18KHvU/edit -->
</para>
</section>
<?hard-pagebreak?>
<section xml:id="section_use-cases-mixed">
<title>Use Case: Mixed Flat and Private Network</title>
<para>
This use case is an extension of the above Flat Network use cases.
In addition to being able to see one or more shared networks via
the OpenStack Networking API, tenants can also have access to private per-tenant
networks (only visible to tenant users).
</para>
<para>
Created VMs can have NICs on any of the shared networks and/or any of the private networks
belonging to the tenant. This enables the creation of "multi-tier"
topologies using VMs with multiple NICs. It also supports a model where
a VM acting as a gateway can provide services such as routing, NAT, or
load balancing.
</para>
<para>
<mediaobject>
<imageobject>
<imagedata scale="55" fileref="../common/figures/UseCase-MixedFlatPrivate.png"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1efSqR6KA2gv-OKl5Rl-oV_zwgYP8mgQHFP2DsBj5Fqo/edit -->
</para>
</section>
<?hard-pagebreak?>
<section xml:id="section_use-cases-single-router">
<title>Use Case: Provider Router with Private Networks</title>
<para>
This use case provides each tenant with one or more private networks, which
connect to the outside world via an OpenStack Networking router.
When each tenant gets exactly one network, this architecture maps to the same
logical topology as the VlanManager in OpenStack Compute (although of course, OpenStack Networking doesn't
require VLANs). Using the OpenStack Networking API, the tenant can only see a
network for each private network assigned to that tenant. The router
object in the API is created and owned by the cloud administrator.
</para>
<para>
This model supports giving VMs public addresses using
"floating IPs", in which the router maps public addresses from the
external network to fixed IPs on private networks. Hosts without floating
IPs can still create outbound connections to the external network, because
the provider router performs SNAT to the router's external IP. The
IP address of the physical router is used as the <literal>gateway_ip</literal> of the
external network subnet, so the provider has a default router for
Internet traffic.
</para>
<para>
The router provides L3 connectivity between private networks, meaning
that different tenants can reach each other's instances unless additional
filtering is used (for example, security groups). Because there is only a single
router, tenant networks cannot use overlapping IPs. Thus, it is likely
that the administrator would create the private networks on behalf of the tenants.
</para>
<para>
<mediaobject>
<imageobject>
<imagedata scale="55" fileref="../common/figures/UseCase-SingleRouter.png"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1DKxeZZXml_fNZHRoGPKkC7sGdkPJZCtWytYZqHIp_ZE/edit -->
</para>
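   <para>A minimal sketch of the administrator-side commands for this topology follows;
     the router name is arbitrary, and the IDs are placeholders for your external
     network and tenant subnet:</para>
   <screen><prompt>$</prompt> <userinput>neutron router-create provider-router</userinput>
<prompt>$</prompt> <userinput>neutron router-gateway-set provider-router <replaceable>ext-net-id</replaceable></userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add provider-router <replaceable>tenant-subnet-id</replaceable></userinput>
<prompt>$</prompt> <userinput>neutron floatingip-create <replaceable>ext-net-id</replaceable></userinput></screen>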
</section>
<?hard-pagebreak?>
<section xml:id="section_use-cases-tenant-router">
<title>Use Case: Per-tenant Routers with Private Networks</title>
<para>
This use case represents a more advanced router scenario in which each tenant gets
at least one router, and potentially has access to the OpenStack Networking API to
create additional routers. The tenant can create their own networks,
potentially uplinking those networks to a router. This model enables
tenant-defined, multi-tier applications, with
   each tier being a separate network behind the router. Because there are
   multiple routers, tenant subnets can overlap without conflicting,
   since access to external networks all happens via SNAT or floating IPs.
Each router uplink and floating IP is allocated from the external network
subnet.
</para>
<para>
<mediaobject>
<imageobject>
<imagedata scale="55"
fileref="../common/figures/UseCase-MultiRouter.png" align="left"/>
</imageobject>
</mediaobject>
<!--Image source link: https://docs.google.com/a/nicira.com/drawings/d/1mmQc8cBUoTEfEns-ehIyQSTvOrjUdl5xeGDv9suVyAY/edit -->
</para>
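   <para>In contrast to the provider-router case, here the tenant can build its own
     uplinked network. The following is a sketch of the tenant-side commands, with
     placeholder names and an illustrative subnet range:</para>
   <screen><prompt>$</prompt> <userinput>neutron router-create my-router</userinput>
<prompt>$</prompt> <userinput>neutron router-gateway-set my-router <replaceable>ext-net-id</replaceable></userinput>
<prompt>$</prompt> <userinput>neutron net-create my-net</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create my-net 10.10.0.0/24</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add my-router <replaceable>my-subnet-id</replaceable></userinput></screen>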
</section>
</section>
</chapter>

View File

@ -1,15 +1,16 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="app_demo_routers_with_private_networks">
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_networking-routers-with-private-networks">
<title>Per-tenant Routers with Private Networks</title>
<para>This section describes how to install the OpenStack Networking service
and its components for the "<link
linkend="use_cases_tenant_router">Use Case: Per-tenant Routers with Private Networks
linkend="section_use-cases-tenant-router">Use Case: Per-tenant Routers with Private Networks
</link>".</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata contentwidth="6in" fileref="figures/UseCase-MultiRouter.png"/>
<imagedata contentwidth="6in" fileref="../common/figures/UseCase-MultiRouter.png"/>
</imageobject>
</mediaobject>
</informalfigure>
@ -20,7 +21,7 @@
<informalfigure>
<mediaobject>
<imageobject>
<imagedata contentwidth="6in" fileref="figures/demo_routers_with_private_networks.png"/>
<imagedata contentwidth="6in" fileref="../common/figures/demo_routers_with_private_networks.png"/>
</imageobject>
</mediaobject>
</informalfigure>
@ -43,9 +44,8 @@
</listitem>
</itemizedlist>
<note><para>Because this example runs a DHCP agent and L3 agent on one node, the
<literal>use_namespace</literal> option must be set to <literal>True</literal> in
the configuration file for each agent. The default is
<literal>True</literal>. See <xref linkend="ch_limitations"/>.</para></note>
<literal>use_namespace</literal> option must be set to <literal>True</literal> in
the configuration file for each agent. The default is <literal>True</literal>.</para></note>
<para>Below is a description of the nodes in the setup:
<informaltable rules="all" width="100%">
<col width="20%"/>
@ -152,8 +152,8 @@
</listitem>
<listitem>
<para>Create database <emphasis role="bold">ovs_neutron</emphasis>.
See the section on the <link linkend="arch_overview">Core
Plugins</link> for the exact details.</para>
    Refer back to the <link linkend="section_install_prereqs">Initial
    prerequisites</link> to get started.</para>
</listitem>
<listitem>
<para>Update the OpenStack Networking configuration file, <filename>

View File

@ -2,11 +2,12 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="app_demo_single_router">
xml:id="section_networking-provider-router_with-provate-networks">
<title>Provider Router with Private Networks</title>
<para>This section describes how to install the OpenStack
Networking service and its components for the "Use Case:
Provider Router with Private Networks."</para>
Networking service and its components for the "<link
linkend="section_use-cases-single-router">Use Case:
Provider Router with Private Networks.</link></para>
<para>We will follow the <link
xlink:href="http://docs.openstack.org/grizzly/basic-install/content/basic-install_intro.html"
><citetitle>Basic Install Guide</citetitle></link> except for the Neutron,
@ -17,11 +18,9 @@
separation instead.</para>
<para>The following figure shows the setup:</para>
<note>
<para>Because you run the DHCP agent and L3 agent on one node,
you must set <literal>use_namespaces</literal> to
<literal>True</literal> (which is the default) in both
agents' configuration files. See <link
linkend="ch_limitations">Limitations</link>.</para>
<para>Because you run the DHCP agent and L3 agent on one node, you must set
<literal>use_namespaces</literal> to <literal>True</literal> (which is the default)
in both agents' configuration files. </para>
</note>
<informalfigure>
<mediaobject>

View File

@ -1,11 +1,11 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="app_demo_flat">
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="section_network-single-flat">
<title>Single Flat Network</title>
<para>This section describes how to install the OpenStack Networking service
and its components for the "<link
linkend="use_cases_single_flat">Use Case: Single Flat Network
</link>".</para>
<para>This section describes how to install the OpenStack Networking service and its components
for the <link linkend="section_use-cases-single-flat">Use
Case: Single Flat Network</link>.</para>
    <para>The diagram below shows the setup. For simplicity, all of the
nodes should have one interface for management traffic and one
or more interfaces for traffic to and from VMs. The management
@ -15,7 +15,7 @@
supported plugin and its agent.</para>
<mediaobject>
<imageobject>
<imagedata scale="60" fileref="figures/demo_flat_install.png"/>
<imagedata scale="60" fileref="../common/figures/demo_flat_install.png"/>
</imageobject>
</mediaobject>
    <para>The setup includes the following nodes.</para>
@ -122,8 +122,8 @@
</listitem>
<listitem>
<para>Create database <emphasis role="bold">ovs_neutron</emphasis>.
See the section on the <link linkend="arch_overview">Core
Plugins</link> for the exact details.</para>
Refer back to the <link linkend="section_install_prereqs"
>Initial prerequisites</link> to get started.</para>
</listitem>
<listitem>
<para>Update the OpenStack Networking configuration file, <filename>

View File

@ -37,4 +37,5 @@
</note>
<xi:include href="../common/section_nova_cli_quotas.xml"/>
<xi:include href="section_cinder_cli_quotas.xml"/>
<xi:include href="../common/section_networking-quotas.xml"/>
</section>