Merge "Replace references to Nicira NVP with VMware NSX"

This commit is contained in:
Jenkins 2014-02-12 15:22:02 +00:00 committed by Gerrit Code Review
commit c9a1db5b24
13 changed files with 216 additions and 215 deletions

View File

@ -50,7 +50,7 @@
<itemizedlist>
<listitem>
<para>Adds options for tuning operational
status synchronization in the NVP
status synchronization in the NSX
plug-in.</para>
</listitem>
</itemizedlist>

View File

@ -722,8 +722,8 @@
Networking, the Networking plug-in must
implement the security group API. The
following plug-ins currently implement this:
ML2, Nicira NVP, Open vSwitch, Linux Bridge,
NEC, and Ryu.</para>
ML2, Open vSwitch, Linux Bridge,
NEC, Ryu, and VMware NSX.</para>
</listitem>
<listitem>
<para>You must configure the correct firewall
@ -1428,8 +1428,8 @@
two instances to enable fast data plane failover.</para>
<note>
<para>The allowed-address-pairs extension is currently
only supported by these plug-ins: ML2, Nicira NVP, and
Open vSwitch.</para>
only supported by these plug-ins: ML2, Open vSwitch,
and VMware NSX.</para>
</note>
<section xml:id="section_allowed_address_pairs_workflow">
<title>Basic allowed address pairs operations</title>
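<para>As a minimal sketch of such an operation (the attribute syntax and the address values are assumptions for illustration, not taken from this guide), an allowed address pair could be added to an existing port:</para>
<screen><prompt>#</prompt> <userinput>neutron port-update &lt;port-uuid&gt; --allowed-address-pairs type=dict list=true \
ip_address=10.0.0.1,mac_address=fa:16:3e:00:00:01</userinput></screen>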
@ -1471,13 +1471,13 @@
<para>Each vendor can choose to implement additional API
extensions to the core API. This section describes the
extensions for each plug-in.</para>
<section xml:id="section_nicira_extensions">
<title>Nicira NVP extensions</title>
<para>These sections explain Nicira NVP plug-in
<section xml:id="section_vmware_extensions">
<title>VMware NSX extensions</title>
<para>These sections explain NSX plug-in
extensions.</para>
<section xml:id="section_nicira_nvp_plugin_qos_extension">
<title>Nicira NVP QoS extension</title>
<para>The Nicira NVP QoS extension rate-limits network
<section xml:id="section_vmware_nsx_plugin_qos_extension">
<title>VMware NSX QoS extension</title>
<para>The VMware NSX QoS extension rate-limits network
ports to guarantee a specific amount of bandwidth
for each port. This extension, by default, is only
accessible by a tenant with an admin role but is
@ -1508,10 +1508,10 @@
subsequently created but are not added to existing
ports.</para>
<section
xml:id="section_nicira_nvp_qos_api_abstractions">
<title>Nicira NVP QoS API abstractions</title>
xml:id="section_vmware_nsx_qos_api_abstractions">
<title>VMware NSX QoS API abstractions</title>
<table rules="all">
<caption>Nicira NVP QoS attributes</caption>
<caption>VMware NSX QoS attributes</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
@ -1582,13 +1582,13 @@
</tbody>
</table>
</section>
<section xml:id="nicira_nvp_qos_walk_through">
<title>Basic Nicira NVP QoS operations</title>
<section xml:id="vmware_nsx_qos_walk_through">
<title>Basic VMware NSX QoS operations</title>
<para>This table shows example neutron commands
that enable you to complete basic queue
operations:</para>
<table rules="all">
<caption>Basic Nicira NVP QoS
<caption>Basic VMware NSX QoS
operations</caption>
<col width="40%"/>
<col width="60%"/>
@ -1640,25 +1640,25 @@
</table>
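<para>As a minimal sketch of the operations in this table (the queue name, the rate values, and the <literal>--queue_id</literal> attribute are assumptions for illustration), a queue could be created and attached to a new network:</para>
<screen><prompt>#</prompt> <userinput>neutron queue-create --min 10 --max 1000 myqueue</userinput>
<prompt>#</prompt> <userinput>neutron net-create mynet --queue_id &lt;queue-uuid&gt;</userinput></screen>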
</section>
</section>
<section xml:id="section_nicira_nvp_provider_extension">
<title>Nicira NVP provider networks extension</title>
<section xml:id="section_vmware_nsx_provider_extension">
<title>VMware NSX provider networks extension</title>
<para>Provider networks can be implemented in
different ways by the underlying NVP
different ways by the underlying NSX
platform.</para>
<para>The <emphasis>FLAT</emphasis> and
<emphasis>VLAN</emphasis> network types use
bridged transport connectors. These network types
enable the attachment of a large number of ports. To
handle the increased scale, the NVP plug-in can
handle the increased scale, the NSX plug-in can
back a single OpenStack network with a chain of
NVP logical switches. You can specify the maximum
NSX logical switches. You can specify the maximum
number of ports on each logical switch in this
chain with the
<literal>max_lp_per_bridged_ls</literal>
parameter, which has a default value of
5,000.</para>
<para>The recommended value for this parameter varies
with the NVP version running in the back-end, as
with the NSX version running in the back-end, as
shown in the following table.</para>
<table rules="all">
<caption>Recommended values for
@ -1667,7 +1667,7 @@
<col width="50%"/>
<thead>
<tr>
<td>NVP version</td>
<td>NSX version</td>
<td>Recommended Value</td>
</tr>
</thead>
@ -1690,43 +1690,43 @@
</tr>
</tbody>
</table>
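<para>A minimal <filename>nsx.ini</filename> sketch of this setting, using the default value quoted above:</para>
<programlisting language="ini">[DEFAULT]
# Maximum number of ports on each NSX logical switch in the chain (default: 5,000)
max_lp_per_bridged_ls = 5000</programlisting>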
<para>In addition to these network types, the NVP
<para>In addition to these network types, the NSX
plug-in also supports a special
<emphasis>l3_ext</emphasis> network type,
which maps external networks to specific NVP
which maps external networks to specific NSX
gateway services as discussed in the next
section.</para>
</section>
<section xml:id="section_nicira_nvp_plugin_l3_extension">
<title>Nicira NVP L3 extension</title>
<para>NVP exposes its L3 capabilities through gateway
<section xml:id="section_vmware_nsx_plugin_l3_extension">
<title>VMware NSX L3 extension</title>
<para>NSX exposes its L3 capabilities through gateway
services, which are usually configured out of band
from OpenStack. To use NVP with L3 capabilities,
first create a L3 gateway service in the NVP
from OpenStack. To use NSX with L3 capabilities,
first create an L3 gateway service in the NSX
Manager. Next, in <filename>
/etc/neutron/plugins/nicira/nvp.ini</filename>
/etc/neutron/plugins/vmware/nsx.ini</filename>
set <literal>default_l3_gw_service_uuid</literal>
to this value. By default, routers are mapped to
this gateway service.</para>
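<para>A minimal sketch of the corresponding <filename>nsx.ini</filename> entry (the UUID is a placeholder for the value shown in the NSX Manager):</para>
<programlisting language="ini">[DEFAULT]
# UUID of the L3 gateway service created out of band in the NSX Manager
default_l3_gw_service_uuid = &lt;l3-gateway-service-uuid&gt;</programlisting>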
<section xml:id="section_nicira_l3_walk_through">
<title>Nicira NVP L3 extension operations</title>
<section xml:id="section_vmware_l3_walk_through">
<title>VMware NSX L3 extension operations</title>
<para>Create an external network and map it to a
specific NVP gateway service:</para>
specific NSX gateway service:</para>
<screen><prompt>#</prompt> <userinput>neutron net-create public --router:external=True --provider:network_type l3_ext \
--provider:physical_network &lt;L3-Gateway-Service-UUID&gt;</userinput></screen>
<para>Terminate traffic on a specific VLAN from an
NVP gateway service:</para>
NSX gateway service:</para>
<screen><prompt>#</prompt> <userinput>neutron net-create public --router:external=True --provider:network_type l3_ext \
--provider:physical_network &lt;L3-Gateway-Service-UUID&gt; --provider:segmentation_id &lt;VLAN_ID&gt;</userinput></screen>
</section>
</section>
<section xml:id="section_nicira_nvp_plugin_status_sync">
<section xml:id="section_vmware_nsx_plugin_status_sync">
<title>Operational status synchronization in the
Nicira NVP plug-in</title>
<para>Starting with the Havana release, the Nicira NVP
VMware NSX plug-in</title>
<para>Starting with the Havana release, the VMware NSX
plug-in provides an asynchronous mechanism for
retrieving the operational status for neutron
resources from the NVP back-end; this applies to
resources from the NSX back-end; this applies to
<emphasis>network</emphasis>,
<emphasis>port</emphasis>, and
<emphasis>router</emphasis> resources.</para>
@ -1740,14 +1740,14 @@
consistently improved.</para>
<para>Data to retrieve from the back-end are divided
into chunks to avoid expensive API
requests; this is achieved leveraging NVP APIs
requests; this is achieved by leveraging the NSX API
response paging capabilities. The minimum chunk
size can be specified using a configuration
option; the actual chunk size is then determined
dynamically according to: total number of
resources to retrieve, interval between two
synchronization task runs, minimum delay between
two subsequent requests to the NVP
two subsequent requests to the NSX
back-end.</para>
<para>The operational status synchronization can be
tuned or disabled using the configuration options
@ -1756,7 +1756,7 @@
cases.</para>
<table rules="all">
<caption>Configuration options for tuning
operational status synchronization in the NVP
operational status synchronization in the NSX
plug-in</caption>
<col width="12%"/>
<col width="8%"/>
@ -1775,7 +1775,7 @@
<tbody>
<tr>
<td><literal>state_sync_interval</literal></td>
<td><literal>nvp_sync</literal></td>
<td><literal>nsx_sync</literal></td>
<td>120 seconds</td>
<td>Integer; no constraint.</td>
<td>Interval in seconds between two runs of
@ -1790,7 +1790,7 @@
</tr>
<tr>
<td><literal>max_random_sync_delay</literal></td>
<td><literal>nvp_sync</literal></td>
<td><literal>nsx_sync</literal></td>
<td>0 seconds</td>
<td>Integer. Must not exceed
<literal>min_sync_req_delay</literal></td>
@ -1802,20 +1802,20 @@
</tr>
<tr>
<td><literal>min_sync_req_delay</literal></td>
<td><literal>nvp_sync</literal></td>
<td><literal>nsx_sync</literal></td>
<td>10 seconds</td>
<td>Integer. Must not exceed
<literal>state_sync_interval</literal>.</td>
<td>The value of this option can be tuned
according to the observed load on the
NVP controllers. Lower values will
NSX controllers. Lower values will
result in faster synchronization, but
might increase the load on the
controller cluster.</td>
</tr>
<tr>
<td><literal>min_chunk_size</literal></td>
<td><literal>nvp_sync</literal></td>
<td><literal>nsx_sync</literal></td>
<td>500 resources</td>
<td>Integer; no constraint.</td>
<td>Minimum number of resources to
@ -1836,12 +1836,12 @@
</tr>
<tr>
<td><literal>always_read_status</literal></td>
<td><literal>nvp_sync</literal></td>
<td><literal>nsx_sync</literal></td>
<td>False</td>
<td>Boolean; no constraint.</td>
<td>When this option is enabled, the
operational status will always be
retrieved from the NVP back-end ad
retrieved from the NSX back-end at
every <literal>GET</literal> request.
In this case it is advisable to
disable the synchronization task.</td>
@ -1851,7 +1851,7 @@
<para>When running multiple OpenStack Networking
server instances, the status synchronization task
should not run on every node; doing so sends
unnecessary traffic to the NVP back-end and
unnecessary traffic to the NSX back-end and
performs unnecessary DB operations. Set the
<option>state_sync_interval</option>
configuration option to a non-zero value
@ -1859,7 +1859,7 @@
status synchronization.</para>
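<para>A minimal sketch of a tuned configuration, using the option names and default values from the preceding table:</para>
<programlisting language="ini">[nsx_sync]
# Interval in seconds between two runs of the synchronization task
state_sync_interval = 120
# Minimum delay in seconds between two subsequent requests to the NSX back-end
min_sync_req_delay = 10
# Minimum number of resources to retrieve per chunk
min_chunk_size = 500</programlisting>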
<para>The <parameter>fields=status</parameter>
parameter in Networking API requests always
triggers an explicit query to the NVP back end,
triggers an explicit query to the NSX back end,
even when you enable asynchronous state
synchronization. For example, <code>GET
/v2.0/networks/&lt;net-id>?fields=status&amp;fields=name</code>.</para>
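<para>Such a request can also be issued directly against the Networking API endpoint (a sketch; the host, port, and token variable are assumptions):</para>
<screen><prompt>$</prompt> <userinput>curl -H "X-Auth-Token: $TOKEN" \
"http://controller:9696/v2.0/networks/&lt;net-id&gt;?fields=status&amp;fields=name"</userinput></screen>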

View File

@ -169,15 +169,6 @@
xlink:href="http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin"
>http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin</link></td>
</tr>
<tr>
<td><emphasis role="bold">Nicira NVP
Plug-in</emphasis></td>
<td>This guide and <link
xlink:href="http://www.vmware.com/products/datacenter-virtualization/nicira.html"
>NVP Product Overview</link>, <link
xlink:href="http://www.nicira.com/support"
>NVP Product Support</link></td>
</tr>
<tr>
<td><emphasis role="bold">Open vSwitch
Plug-in</emphasis></td>
@ -196,6 +187,16 @@
xlink:href="https://github.com/osrg/ryu/wiki/OpenStack"
>https://github.com/osrg/ryu/wiki/OpenStack</link></td>
</tr>
<tr>
<!-- TODO: update support link, when available -->
<td><emphasis role="bold">VMware NSX
Plug-in</emphasis></td>
<td>This guide and <link
xlink:href="http://www.vmware.com/nsx"
>NSX Product Overview</link>, <link
xlink:href="http://www.nicira.com/support"
>NSX Product Support</link></td>
</tr>
</tbody>
</table>
<para>Plug-ins can have different properties for hardware
@ -312,14 +313,6 @@
<td/>
<td/>
</tr>
<tr>
<td>Nicira NVP</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td/>
<td/>
</tr>
<tr>
<td>Open vSwitch</td>
<td>Yes</td>
@ -344,6 +337,14 @@
<td/>
<td/>
</tr>
<tr>
<td>VMware NSX</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td/>
<td/>
</tr>
</tbody>
</table>
<section xml:id="section_plugin-config">
@ -491,37 +492,37 @@ local_ip=&lt;data-net-IP-address-of-node&gt;</programlisting>
</step>
</procedure>
</section>
<section xml:id="nvp_plugin">
<title>Configure Nicira NVP plug-in</title>
<section xml:id="nsx_plugin">
<title>Configure NSX plug-in</title>
<procedure>
<title>To configure OpenStack Networking to use
the NVP plug-in</title>
the NSX plug-in</title>
<para>While the instructions in this section refer
to the Nicira NVP platform, they also apply to
VMware NSX.</para>
to the VMware NSX platform, note that this
platform was formerly known as Nicira NVP.</para>
<step>
<para>Install the NVP plug-in, as
<para>Install the NSX plug-in, as
follows:</para>
<screen><prompt>#</prompt> <userinput>sudo apt-get install neutron-plugin-nicira</userinput></screen>
<screen><prompt>#</prompt> <userinput>sudo apt-get install neutron-plugin-vmware</userinput></screen>
</step>
<step>
<para>Edit
<filename>/etc/neutron/neutron.conf</filename>
and set:</para>
<programlisting language="ini">core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2</programlisting>
<programlisting language="ini">core_plugin = neutron.plugins.vmware.NsxPlugin</programlisting>
<para>Example
<filename>neutron.conf</filename> file
for NVP:</para>
<programlisting language="ini">core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2
for NSX:</para>
<programlisting language="ini">core_plugin = neutron.plugins.vmware.NsxPlugin
rabbit_host = 192.168.203.10
allow_overlapping_ips = True</programlisting>
</step>
<step>
<para>To configure the NVP controller cluster
<para>To configure the NSX controller cluster
for the OpenStack Networking service,
locate the <literal>[DEFAULT]</literal>
section in the
<filename>/etc/neutron/plugins/nicira/nvp.ini</filename>
<filename>/etc/neutron/plugins/vmware/nsx.ini</filename>
file, and add the following entries (for
database configuration, see <link
xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
@ -533,22 +534,22 @@ allow_overlapping_ips = True</programlisting>
<para>To establish and configure the
connection with the controller
cluster, you must set some
parameters, including NVP API
parameters, including NSX API
endpoints, access credentials, and
settings for HTTP redirects and
retries in case of connection
failures:</para>
<programlisting language="ini">nvp_user = &lt;admin user name>
nvp_password = &lt;password for nvp_user>
req_timeout = &lt;timeout in seconds for NVP_requests> # default 30 seconds
<programlisting language="ini">nsx_user = &lt;admin user name>
nsx_password = &lt;password for nsx_user>
req_timeout = &lt;timeout in seconds for NSX requests> # default 30 seconds
http_timeout = &lt;timeout in seconds for a single HTTP request> # default 10 seconds
retries = &lt;number of HTTP request retries> # default 2
redirects = &lt;maximum allowed redirects for an HTTP request> # default 3
nvp_controllers = &lt;comma separated list of API endpoints></programlisting>
nsx_controllers = &lt;comma separated list of API endpoints></programlisting>
<para>To ensure correct operations,
the <literal>nvp_user</literal>
the <literal>nsx_user</literal>
user must have administrator
credentials on the NVP
credentials on the NSX
platform.</para>
<para>A controller API endpoint
consists of the IP address and port
@ -558,7 +559,7 @@ nvp_controllers = &lt;comma separated list of API endpoints></programlisting>
up to the user to ensure that all
these endpoints belong to the same
controller cluster. The OpenStack
Networking Nicira NVP plug-in does
Networking VMware NSX plug-in does
not perform this check, and results
might be unpredictable.</para>
<para>When you specify multiple API
@ -567,10 +568,10 @@ nvp_controllers = &lt;comma separated list of API endpoints></programlisting>
various API endpoints.</para>
</listitem>
<listitem>
<para>The UUID of the NVP Transport
<para>The UUID of the NSX Transport
Zone that should be used by default
when a tenant creates a network.
You can get this value from the NVP
You can get this value from the NSX
Manager's Transport Zones
page:</para>
<programlisting language="ini">default_tz_uuid = &lt;uuid_of_the_transport_zone&gt;</programlisting>
@ -580,12 +581,12 @@ nvp_controllers = &lt;comma separated list of API endpoints></programlisting>
<warning>
<para>Ubuntu packaging currently
does not update the Neutron init
script to point to the NVP
script to point to the NSX
configuration file. Instead, you
must manually update
<filename>/etc/default/neutron-server</filename>
to add this line:</para>
<programlisting language="ini">NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/nicira/nvp.ini</programlisting>
<programlisting language="ini">NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/vmware/nsx.ini</programlisting>
</warning>
</listitem>
</itemizedlist>
@ -597,40 +598,40 @@ nvp_controllers = &lt;comma separated list of API endpoints></programlisting>
<screen><prompt>#</prompt> <userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
<para>Example <filename>nvp.ini</filename>
<para>Example <filename>nsx.ini</filename>
file:</para>
<programlisting language="ini">[DEFAULT]
default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf
nvp_user=admin
nvp_password=changeme
nvp_controllers=10.127.0.100,10.127.0.200:8888</programlisting>
nsx_user=admin
nsx_password=changeme
nsx_controllers=10.127.0.100,10.127.0.200:8888</programlisting>
<note>
<para>To debug <filename>nvp.ini</filename>
<para>To debug <filename>nsx.ini</filename>
configuration issues, run this command from
the host that runs <systemitem class="service"
>neutron-server</systemitem>:</para>
<screen><prompt>#</prompt> <userinput>check-nvp-config &lt;path/to/nvp.ini&gt;</userinput></screen>
<screen><prompt>#</prompt> <userinput>neutron-check-nsx-config &lt;path/to/nsx.ini&gt;</userinput></screen>
<para>This command tests whether <systemitem
class="service"
>neutron-server</systemitem> can log into
all of the NVP Controllers and the SQL server,
all of the NSX Controllers and the SQL server,
and whether all UUID values are
correct.</para>
</note>
<section xml:id="LBaaS_and_FWaaS">
<title>Load Balancer-as-a-Service and
Firewall-as-a-Service</title>
<para>The NVP LBaaS and FWaaS services use the
<para>The NSX LBaaS and FWaaS services use the
standard OpenStack API with the exception of
requiring routed-insertion extension
support.</para>
<para>The main differences between the NVP
<para>The main differences between the NSX
implementation and the community reference
implementation of these services are:</para>
<orderedlist>
<listitem>
<para>The NVP LBaaS and FWaaS plug-ins
<para>The NSX LBaaS and FWaaS plug-ins
require the routed-insertion
extension, which adds the
<code>router_id</code> attribute to
@ -643,7 +644,7 @@ nvp_controllers=10.127.0.100,10.127.0.200:8888</programlisting>
implementation of LBaaS only supports
a one-arm model, which restricts the
VIP to be on the same subnet as the
back-end servers. The NVP LBaaS
back-end servers. The NSX LBaaS
plug-in only supports a two-arm model
for north-south traffic, which
means that you can create the VIP on
@ -654,7 +655,7 @@ nvp_controllers=10.127.0.100,10.127.0.200:8888</programlisting>
<para>The community reference
implementation of FWaaS applies
firewall rules to all logical routers
in a tenant, while the NVP FWaaS
in a tenant, while the NSX FWaaS
plug-in applies firewall rules only to
one logical router according to the
<code>router_id</code> of the
@ -664,29 +665,29 @@ nvp_controllers=10.127.0.100,10.127.0.200:8888</programlisting>
<procedure>
<title>To configure Load Balancer-as-a-Service
and Firewall-as-a-Service with
NVP:</title>
NSX:</title>
<step>
<para>Edit the
<filename>/etc/neutron/neutron.conf</filename>
file:</para>
<programlisting language="ini">core_plugin = neutron.plugins.nicira.NeutronServicePlugin.NvpAdvancedPlugin
# Note: comment out service_plug-ins. LBaaS &amp; FWaaS is supported by core_plugin NvpAdvancedPlugin
<programlisting language="ini">core_plugin = neutron.plugins.vmware.NsxServicePlugin
# Note: comment out service_plugins. LBaaS &amp; FWaaS are supported by the core_plugin NsxServicePlugin
# service_plugins = </programlisting>
</step>
<step>
<para>Edit the
<filename>/etc/neutron/plugins/nicira/nvp.ini</filename>
<filename>/etc/neutron/plugins/vmware/nsx.ini</filename>
file:</para>
<para>In addition to the original NVP
<para>In addition to the original NSX
configuration, the
<code>default_l3_gw_service_uuid</code>
is required for the NVP Advanced
is required for the NSX Advanced
plug-in and you must add a <code>vcns</code>
section:</para>
<programlisting language="ini">[DEFAULT]
nvp_password = <replaceable>admin</replaceable>
nvp_user = <replaceable>admin</replaceable>
nvp_controllers = <replaceable>10.37.1.137:443</replaceable>
nsx_password = <replaceable>admin</replaceable>
nsx_user = <replaceable>admin</replaceable>
nsx_controllers = <replaceable>10.37.1.137:443</replaceable>
default_l3_gw_service_uuid = <replaceable>aae63e9b-2e4e-4efe-81a1-92cf32e308bf</replaceable>
default_tz_uuid = <replaceable>2702f27a-869a-49d1-8781-09331a0f6b9e</replaceable>
@ -701,7 +702,7 @@ nvp_controllers=10.127.0.100,10.127.0.200:8888</programlisting>
# VSM admin password
password = <replaceable>default</replaceable>
# UUID of a logical switch on NVP which has physical network connectivity (currently using bridge transport type)
# UUID of a logical switch on NSX which has physical network connectivity (currently using bridge transport type)
external_network = <replaceable>f2c023cf-76e2-4625-869b-d0dabcfcc638</replaceable>
# ID of deployment_container on VSM. Optional, if not specified, a default global deployment container is used
@ -920,24 +921,24 @@ password = "PLUMgrid-director-admin-password"</programlisting>
</step>
</procedure>
</section>
<section xml:id="install_neutron_agent_nvp">
<title>Node set up: Nicira NVP plug-in</title>
<para>If you use the Nicira NVP plug-in, you must also
<section xml:id="install_neutron_agent_nsx">
<title>Node set up: NSX plug-in</title>
<para>If you use the NSX plug-in, you must also
install Open vSwitch on each data-forwarding node.
However, you do not need to install an additional
agent on each node.</para>
<warning>
<para>It is critical that you are running an Open
vSwitch version that is compatible with the
current version of the NVP Controller
current version of the NSX Controller
software. Do not use the Open vSwitch version
that is installed by default on Ubuntu.
Instead, use the Open vSwitch version that is
provided on the Nicira support portal for your
NVP Controller version.</para>
provided on the VMware support portal for your
NSX Controller version.</para>
</warning>
<procedure>
<title>To set up each node for the Nicira NVP
<title>To set up each node for the NSX
plug-in</title>
<step>
<para>Ensure that each data-forwarding node has an
@ -945,29 +946,29 @@ password = "PLUMgrid-director-admin-password"</programlisting>
and an IP address on the "data network"
that is used for tunneling data traffic.
For full details on configuring your
forwarding node, see the <citetitle>NVP
forwarding node, see the <citetitle>NSX
Administrator
Guide</citetitle>.</para>
</step>
<step>
<para>Use the <citetitle>NVP Administrator
<para>Use the <citetitle>NSX Administrator
Guide</citetitle> to add the node as a
Hypervisor by using the NVP Manager GUI.
Hypervisor by using the NSX Manager GUI.
Even if your forwarding node has no VMs
and is only used for service agents like
<systemitem>neutron-dhcp-agent</systemitem>
or
<systemitem>neutron-lbaas-agent</systemitem>,
it should still be added to NVP as a
it should still be added to NSX as a
Hypervisor.</para>
</step>
<step>
<para>After following the <citetitle>NVP
<para>After following the <citetitle>NSX
Administrator Guide</citetitle>, use
the page for this Hypervisor in the NVP
the page for this Hypervisor in the NSX
Manager GUI to confirm that the node is
properly connected to the NVP Controller
Cluster and that the NVP Controller
properly connected to the NSX Controller
Cluster and that the NSX Controller
Cluster can see the
<literal>br-int</literal> integration
bridge.</para>
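<para>As a quick local check (a sketch using standard Open vSwitch tooling), you can also confirm from the node itself that the integration bridge exists:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl br-exists br-int &amp;&amp; echo "br-int is present"</userinput></screen>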
@ -1075,11 +1076,11 @@ enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</section>
<section xml:id="dhcp_agent_nvp">
<title>DHCP agent setup: NVP plug-in</title>
<section xml:id="dhcp_agent_nsx">
<title>DHCP agent setup: NSX plug-in</title>
<para>These DHCP agent options are required in the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file for the NVP plug-in:</para>
file for the NSX plug-in:</para>
<programlisting language="bash">[DEFAULT]
ovs_use_veth = True
enable_metadata_network = True
@ -1110,7 +1111,7 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
plug-ins already have built-in L3 capabilities:</para>
<itemizedlist>
<listitem>
<para>Nicira NVP plug-in</para>
<para>NSX plug-in</para>
</listitem>
<listitem>
<para>Big Switch/Floodlight plug-in, which
@ -1231,7 +1232,7 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
<itemizedlist>
<listitem>
<para>An OVS-based plug-in such as OVS,
NVP, Ryu, NEC,
NSX, Ryu, NEC,
BigSwitch/Floodlight:</para>
<programlisting language="ini">interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</listitem>

View File

@ -141,7 +141,7 @@ quota_security_group_rule = 100</programlisting>
+-------------+------------------------------------------------------------+</computeroutput></screen>
<note>
<para>Only some plug-ins support per-tenant quotas.
Specifically, Open vSwitch, Linux Bridge, and Nicira NVP
Specifically, Open vSwitch, Linux Bridge, and VMware NSX
support them, but new versions of other plug-ins might
bring additional functionality. See the documentation for
each plug-in.</para>
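<para>With a plug-in that supports them, a per-tenant quota can be raised as follows (a sketch; the tenant ID and the new value are placeholders):</para>
<screen><prompt>#</prompt> <userinput>neutron quota-update --tenant_id &lt;tenant-uuid&gt; --network 20</userinput></screen>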

View File

@ -24,9 +24,9 @@
addressing. These plug-ins and agents differ depending on the
vendor and technologies used in the particular cloud.
OpenStack Networking ships with plug-ins and agents for Cisco
virtual and physical switches, Nicira NVP product, NEC
OpenFlow products, Open vSwitch, Linux bridging, and the Ryu
Network Operating System.</para>
virtual and physical switches, NEC OpenFlow products, Open
vSwitch, Linux bridging, Ryu Network Operating System, and
the VMware NSX product.</para>
<para>The common agents are L3 (layer 3), DHCP (dynamic host IP
addressing), and a plug-in agent.</para>
</listitem>

View File

@ -5,7 +5,7 @@
repository -->
<para xmlns="http://docbook.org/ns/docbook" version="5.0">
<table rules="all">
<caption>Description of configuration options for nicira</caption>
<caption>Description of configuration options for VMware NSX</caption>
<col width="50%"/>
<col width="50%"/>
<thead>

View File

@ -91,10 +91,10 @@
href="../../common/tables/neutron-nec.xml"
/>
</section>
<section xml:id="networking-plugin-nicira">
<title>Nicira NVP configuration options</title>
<section xml:id="networking-plugin-vmware">
<title>VMware NSX configuration options</title>
<xi:include
href="../../common/tables/neutron-nicira.xml"
href="../../common/tables/neutron-vmware.xml"
/>
</section>
<section xml:id="networking-plugin-openvswitch">

View File

@ -3123,13 +3123,6 @@ Each entry in a typical ACL specifies a subject and an operation. For instance,
Compute.</para>
</glossdef>
</glossentry>
<glossentry>
<glossterm>Nicira NVP neutron plug-in</glossterm>
<glossdef>
<para>Provides support for the Nicira Network
Virtualization Platform (NVP) in Networking.</para>
</glossdef>
</glossentry>
<glossentry>
<glossterm>No ACK</glossterm>
<glossdef>
@ -4829,6 +4822,12 @@ Each entry in a typical ACL specifies a subject and an operation. For instance,
Compute.</para>
</glossdef>
</glossentry>
<glossentry>
<glossterm>VMware NSX Neutron plug-in</glossterm>
<glossdef>
<para>Provides support for VMware NSX in Neutron.</para>
</glossdef>
</glossentry>
<glossentry>
<glossterm>VNC proxy</glossterm>
<glossdef>

View File

@ -60,8 +60,8 @@
<para>Gregg Tally is the Chief Engineer at JHU/APL's Cyber Systems Group within the Asymmetric Operations Department. He works primarily in systems security engineering. Previously, he has worked at SPARTA, McAfee, and Trusted Information Systems where he was involved in cyber security research projects.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Eric Lopez</emphasis>, Nicira / VMware</para>
<para>Eric Lopez is Senior Solution Architect at VMware's Networking and Security Business Unit where he helps customer implement OpenStack and Nicira's Network Virtualization Platform. Prior to joining Nicira, he worked for Q1 Labs, Symantec, Vontu, and Brightmail. He has a B.S in Electrical Engineering/Computer Science and Nuclear Engineering from U.C. Berkeley and MBA from the University of San Francisco.</para>
<para><emphasis role="bold">Eric Lopez</emphasis>, VMware</para>
<para>Eric Lopez is a Senior Solution Architect at VMware's Networking and Security Business Unit, where he helps customers implement OpenStack and VMware NSX (formerly known as Nicira's Network Virtualization Platform). Prior to joining VMware (through the company's acquisition of Nicira), he worked for Q1 Labs, Symantec, Vontu, and Brightmail. He has a B.S. in Electrical Engineering/Computer Science and Nuclear Engineering from U.C. Berkeley and an MBA from the University of San Francisco.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Shawn Wells</emphasis>, Red Hat</para>

View File

@ -68,7 +68,7 @@
<section xml:id="ch032_networking-best-practices-idp74544">
<title>Network Services Extensions</title>
<para>Here is a list of known plugins provided by the open source community or by SDN companies that work with OpenStack Networking:</para>
<para>Big Switch Controller Plugin, Brocade Neutron Plugin Brocade Neutron Plugin, Cisco UCS/Nexus Plugin, Cloudbase Hyper-V Plugin, Extreme Networks Plugin, Juniper Networks Neutron Plugin, Linux Bridge Plugin, Mellanox Neutron Plugin, MidoNet Plugin, NEC OpenFlow Plugin, Nicira Network Virtualization Platform (NVP) Plugin, Open vSwitch Plugin, PLUMgrid Plugin, Ruijie Networks Plugin, Ryu OpenFlow Controller Plugin</para>
<para>Big Switch Controller Plugin, Brocade Neutron Plugin, Cisco UCS/Nexus Plugin, Cloudbase Hyper-V Plugin, Extreme Networks Plugin, Juniper Networks Neutron Plugin, Linux Bridge Plugin, Mellanox Neutron Plugin, MidoNet Plugin, NEC OpenFlow Plugin, Open vSwitch Plugin, PLUMgrid Plugin, Ruijie Networks Plugin, Ryu OpenFlow Controller Plugin, VMware NSX Plugin</para>
<para>For a more detailed comparison of all features provided by plugins as of the Folsom release, see <link xlink:href="http://www.sebastien-han.fr/blog/2012/09/28/quantum-plugin-comparison/">Sebastien Han's comparison</link>.</para>
</section>
<section xml:id="ch032_networking-best-practices-idp78032">

View File

@ -321,9 +321,9 @@
or subnets and IP addressing. These plugins and agents
differ depending on the vendor and technologies used in
the particular cloud. Quantum ships with plugins and
agents for: Cisco virtual and physical switches, Nicira
NVP product, NEC OpenFlow products, Openvswitch, Linux
bridging and the Ryu Network Operating System.</para>
agents for: Cisco virtual and physical switches, NEC
OpenFlow products, Open vSwitch, Linux bridging, the Ryu
Network Operating System, and VMware NSX.</para>
</listitem>
<listitem>
<para>The common agents are L3 (layer 3), DHCP (dynamic host

View File

@ -66,8 +66,15 @@
<para>The current set of plugins includes:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Open vSwitch:</emphasis>
Documentation included in this guide.</para>
<para><emphasis role="bold">Big Switch, Floodlight REST
Proxy:</emphasis>
<link
xlink:href="http://www.openflowhub.org/display/floodlightcontroller/Quantum+REST+Proxy+Plugin"
>http://www.openflowhub.org/display/floodlightcontroller/Quantum+REST+Proxy+Plugin</link></para>
</listitem>
<listitem>
<para><emphasis role="bold">Brocade
Plugin</emphasis></para>
</listitem>
<listitem>
<para><emphasis role="bold">Cisco:</emphasis> Documented
@ -75,6 +82,10 @@
xlink:href="http://wiki.openstack.org/cisco-quantum"
>http://wiki.openstack.org/cisco-quantum</link></para>
</listitem>
<listitem>
<para><emphasis role="bold">Hyper-V
Plugin</emphasis></para>
</listitem>
<listitem>
<para><emphasis role="bold">Linux Bridge:</emphasis>
Documentation included in this guide and <link
@ -83,18 +94,8 @@
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Nicira NVP:</emphasis>
Documentation include in this guide, <link
xlink:href="http://www.vmware.com/products/datacenter-virtualization/nicira.html"
>NVP Product Overview </link>, and <link
xlink:href="http://www.nicira.com/support">NVP
Product Support</link>.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Ryu:</emphasis>
<link
xlink:href="https://github.com/osrg/ryu/wiki/OpenStack"
>https://github.com/osrg/ryu/wiki/OpenStack</link></para>
<para><emphasis role="bold">Midonet
Plugin</emphasis></para>
</listitem>
<listitem>
<para><emphasis role="bold">NEC OpenFlow:</emphasis>
@ -103,11 +104,8 @@
>http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin</link></para>
</listitem>
<listitem>
<para><emphasis role="bold">Big Switch, Floodlight REST
Proxy:</emphasis>
<link
xlink:href="http://www.openflowhub.org/display/floodlightcontroller/Quantum+REST+Proxy+Plugin"
>http://www.openflowhub.org/display/floodlightcontroller/Quantum+REST+Proxy+Plugin</link></para>
<para><emphasis role="bold">Open vSwitch:</emphasis>
Documentation included in this guide.</para>
</listitem>
<listitem>
<para><emphasis role="bold">PLUMgrid:</emphasis>
@ -116,16 +114,19 @@
>https://wiki.openstack.org/wiki/Plumgrid-quantum</link></para>
</listitem>
<listitem>
<para><emphasis role="bold">Hyper-V
Plugin</emphasis></para>
<para><emphasis role="bold">Ryu:</emphasis>
<link
xlink:href="https://github.com/osrg/ryu/wiki/OpenStack"
>https://github.com/osrg/ryu/wiki/OpenStack</link></para>
</listitem>
<listitem>
<para><emphasis role="bold">Brocade
Plugin</emphasis></para>
</listitem>
<listitem>
<para><emphasis role="bold">Midonet
Plugin</emphasis></para>
<!-- TODO: Update support link, when available -->
<para><emphasis role="bold">VMware NSX:</emphasis>
Documentation included in this guide, <link
xlink:href="http://www.vmware.com/products/nsx"
>NSX Product Overview</link>, and <link
xlink:href="http://www.nicira.com/support">NSX
Product Support</link>.</para>
</listitem>
</itemizedlist>
<para>Plugins can have different properties in terms of hardware

View File

@ -7,13 +7,13 @@ admin_user common
admin_username embrane
agent_boot_time ml2_l2pop
agent_down_time agent
agent_mode nicira
agent_mode vmware
allow_bulk api
allow_overlapping_ips policy
allow_pagination api
allow_sorting api
allowed_rpc_exception_modules common
always_read_status nicira
always_read_status vmware
amqp_auto_delete rpc
amqp_durable_queues rpc
api_extensions_path api
@ -33,7 +33,7 @@ bind_host common
bind_port common
bridge_mappings openvswitch_agent
cert_file nec
concurrent_connections nicira
concurrent_connections vmware
config_base_dir vpn
connection db
connection_debug db
@ -41,14 +41,14 @@ connection_trace db
control_exchange rpc
core_plugin common
daemon_endpoint mlnx
datacenter_moid nicira
datastore_id nicira
datacenter_moid vmware
datastore_id vmware
debug logging
default_flavor meta
default_interface_name nicira
default_l2_gw_service_uuid nicira
default_interface_name vmware
default_l2_gw_service_uuid vmware
default_l3_flavor meta
default_l3_gw_service_uuid nicira
default_l3_gw_service_uuid vmware
default_log_levels logging
default_network_profile cisco
default_notification_level notifier
@ -56,10 +56,10 @@ default_policy_profile cisco
default_publisher_id notifier
default_quota quotas
default_router_provider nec
default_service_cluster_uuid nicira
default_transport_type nicira
default_tz_uuid nicira
deployment_container_id nicira
default_service_cluster_uuid vmware
default_transport_type vmware
default_tz_uuid vmware
deployment_container_id vmware
dhcp_agent_notification common
dhcp_agents_per_network db
dhcp_lease_duration common
@ -80,7 +80,7 @@ enable_vxlan linuxbridge_agent
enabled fwaas
esm_mgmt embrane
extension_map meta
external_network nicira
external_network vmware
external_pids agent
fake_rabbit testing
fatal_deprecations logging
@ -92,7 +92,7 @@ host cisco
host common
host nec
host rpc
http_timeout nicira
http_timeout vmware
idle_timeout db
inband_id embrane
instance_format logging
@ -132,29 +132,29 @@ logging_default_format_string logging
logging_exception_prefix logging
mac_generation_retries common
managed_physical_network ml2_cisco
manager_uri nicira
manager_uri vmware
matchmaker_heartbeat_freq rpc
matchmaker_heartbeat_ttl rpc
max_dns_nameservers common
max_fixed_ips_per_port common
max_lp_per_bridged_ls nicira
max_lp_per_overlay_ls nicira
max_lp_per_bridged_ls vmware
max_lp_per_overlay_ls vmware
max_overflow db
max_pool_size db
max_random_sync_delay nicira
max_random_sync_delay vmware
max_retries db
max_router_rules bigswitch
max_routes quotas
max_subnet_host_routes common
mechanism_drivers ml2
meta_flavor_driver_mappings common
metadata_mode nicira
metadata_mode vmware
mgmt_id embrane
midonet_host_uuid_path midonet
midonet_uri midonet
min_chunk_size nicira
min_chunk_size vmware
min_pool_size db
min_sync_req_delay nicira
min_sync_req_delay vmware
minimize_polling openvswitch_agent
mode midonet
model_class cisco
@ -181,10 +181,10 @@ node_override_vif_ovs bigswitch
node_override_vif_unbound bigswitch
notification_driver notifier
notification_topics notifier
nsx_controllers nicira
nsx_gen_timeout nicira
nsx_password nicira
nsx_user nicira
nsx_controllers vmware
nsx_gen_timeout vmware
nsx_password vmware
nsx_user vmware
oob_id embrane
openflow_rest_api ryu
ostype brocade
@ -253,14 +253,14 @@ rabbit_retry_interval rabbitmq
rabbit_use_ssl rabbitmq
rabbit_userid rabbitmq
rabbit_virtual_host rabbitmq
redirects nicira
redirects vmware
region_name ml2_arista
report_interval agent
req_timeout nicira
req_timeout vmware
request_timeout mlnx
resource_pool_id embrane
resource_pool_id nicira
retries nicira
resource_pool_id vmware
retries vmware
retry_interval db
retry_until_window wsgi
ringfile rpc
@ -307,13 +307,13 @@ ssl_ca_file ssl
ssl_cert_file ssl
ssl_key_file ssl
state_path common
state_sync_interval nicira
state_sync_interval vmware
supported_extension_aliases meta
svi_round_robin cisco
sync_data bigswitch
sync_interval ml2_arista
syslog_log_facility logging
task_status_check_interval nicira
task_status_check_interval vmware
tcp_keepidle wsgi
tenant_default_router_rule bigswitch
tenant_network_type hyperv
@ -342,7 +342,7 @@ use_ssl ssl
use_stderr logging
use_syslog logging
use_tpool db
user nicira
user vmware
user_group lbaas
username brocade
username midonet