Fix errors and style in the guide

Change-Id: I57f1c39dc0a0a5542c64228dd0779d5f45a179b7
Evgeny Konstantinov 2016-07-18 15:19:49 +03:00
parent 615c977151
commit a08160d729
10 changed files with 297 additions and 290 deletions

Welcome to Fuel NSXv plugin's documentation!
============================================
The Fuel NSXv plugin allows you to deploy an OpenStack cluster which can use
a pre-existing vSphere infrastructure with the NSX network virtualization
platform.

The plugin installs the Neutron NSX core plugin and allows logical network
equipment (routers, networks) to be created as NSX entities.

The plugin supports VMware NSX 6.1.3, 6.1.4, and 6.2.1.
Plugin versions:
* 3.x.x series is compatible with Fuel 9.0. Tests were performed on the plugin
v3.0 with VMware NSX 6.2.0 and vCenter 5.5.
* 2.x.x series is compatible with Fuel 8.0. Tests were performed on the plugin
v2.0 with VMware NSX 6.2.0 and vCenter 5.5.
* 1.x.x series is compatible with Fuel 7.0. Tests were performed on the plugin
v1.2 with VMware NSX 6.1.4 and vCenter 5.5.
This documentation uses the terms "NSX" and "NSXv" interchangeably; both of
these terms refer to `VMware NSX virtualized network platform
<https://www.vmware.com/products/nsx>`_.
The pre-built package of the plugin is available in the
`Fuel Plugin Catalog <https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins>`_.
Documentation contents

How to build the plugin from source
===================================
To build the plugin, you first need to install fuel-plugin-builder_ 4.1.0:
.. code-block:: bash
$ pip install fuel-plugin-builder==4.1.0
Build the plugin:
.. code-block:: bash
$ cd fuel-plugin-nsxv/
The librarian-puppet_ ruby package must be installed. It is used to fetch the
upstream fuel-library_ puppet modules that the plugin uses. It can be installed
via the *gem* package manager:
.. code-block:: bash
$ gem install librarian-puppet
or, if you are using Ubuntu Linux, you can install it from the repository:
.. code-block:: bash
$ apt-get install librarian-puppet
and build the plugin:
.. code-block:: bash
$ fpb --build .
fuel-plugin-builder will produce an .rpm package of the plugin, which you need
to upload to the Fuel master node:
.. code-block:: bash
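# Illustrative upload step -- the package file name and the Fuel master
# address are placeholders; adjust them to your build output and environment:
$ scp nsxv-3.0-3.0.0-1.noarch.rpm root@<fuel_master_node>:/tmp/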

Configuration
=============
Switch to the :guilabel:`Networks` tab of the Fuel web UI and click the
:guilabel:`Settings`/:guilabel:`Other` section. The plugin checkbox is enabled
by default. The screenshot below shows only the settings in focus:
.. image:: /image/nsxv-settings-filled.png
:scale: 60 %
Several plugin input fields refer to MoRef ID (Managed Object Reference ID);
these object IDs can be obtained using the Managed Object Browser, located on
the vCenter host, e.g. https://<vcenter_host>/mob
Starting from Fuel 9.0, the settings on the Fuel web UI are not disabled,
and it is possible to run the deployment process with changed settings
against a working cluster. This change also impacts the plugin settings,
which can be changed and applied to Neutron. From the plugin perspective,
it is not possible to disable specific input fields; the settings below
that break Neutron operations are marked with a comment.
The plugin contains the following settings:
#. NSX Manager hostname (or IP) -- if you are going to use a hostname in this
textbox, ensure that your OpenStack controller can resolve the hostname.
Add the necessary DNS servers in the :guilabel:`Host OS DNS Servers` section.
NSX Manager must be connected to the vCenter server specified on
the VMware tab.
The OpenStack controller must have L3 connectivity with NSX Manager through
the Public network.
#. NSX Manager username.
.. note::
For the Neutron NSX plugin to operate properly, the account in use
must have the Enterprise administrator role.
#. NSX Manager password.
#. Bypass NSX Manager certificate verification -- if enabled, the HTTPS
connection is not verified. Otherwise, the two following options are
available:
* The setting "CA certificate file" appears below, making it possible to
upload a CA certificate that issued the NSX Manager certificate.
* With no CA certificate provided, the NSX Manager certificate is verified
against the CA certificate bundle that comes by default within the
OpenStack controller node operating system.
#. CA certificate file -- a file in PEM format that contains a bundle of CA
certificates used by the plugin during the NSX Manager certificate
verification. If no file is present, the HTTPS connection is not
verified.
#. Datacenter MoRef ID -- an ID of the Datacenter where the NSX Edge nodes
are deployed.
#. Resource pool MoRef ID -- a resource pool for the NSX Edge nodes deployment.
Changing this setting on a deployed cluster affects only the new Edges.
#. Datastore MoRef ID -- a datastore for NSX Edge nodes. A change of the
datastore setting on a deployed cluster affects only the new Edges.
#. External portgroup MoRef ID -- a portgroup through which the NSX Edge nodes
get connectivity with the physical network.
#. Transport zone MoRef ID -- a transport zone for VXLAN logical networks.
.. note::
This ID can be fetched using NSX Manager API
https://<nsx_manager_host>/api/2.0/vdn/scopes
#. Distributed virtual switch MoRef ID -- the ID of the vSwitch connected to the Edge
cluster.
#. NSX backup Edge pool -- the size of the NSX Edge nodes and the size of the
Edge pool. The value must be in the format:
<edge_type>:[edge_size]:<min_edges>:<max_edges>.
**edge_type** can take the following values: *service* or *vdr* (service and
distributed edge, respectively).
NSX *vdr* nodes perform distributed routing and bridging.
**edge_size** can take the following values: *compact*, *large* (the default
value if omitted), *xlarge*, *quadlarge*.
**min_edges** and **max_edges** define the minimum and maximum number of NSX
Edge nodes in the pool.
The following table describes the NSX Edge types:
=========  ===================
Edge size  Edge VM parameters
=========  ===================
xlarge     6 vCPU 8192 MB vRAM
=========  ===================
Example values:
``service:compact:1:2,vdr:compact:1:3``
``service:xlarge:2:6,service:large:4:10,vdr:large:2:4``
#. Enable HA for NSX Edges -- if you enable this option, the NSX Edges will be
deployed in an active/standby pair on different ESXi hosts.
Changing this setting on a deployed cluster affects only the new Edges.
#. Init metadata infrastructure -- if enabled, the instance attempts to
initialize the metadata infrastructure to access the metadata proxy service;
otherwise, the metadata proxy is not deployed.
#. Bypass metadata service certificate verification -- if enabled, the
metadata service listens on an HTTP port. Otherwise, a self-signed
certificate is generated, installed into the Edge nodes and
nova-api-metadata, and HTTPS is enabled.
#. Which network will be used to access the nova-metadata -- select a network
through which the nova-api-metadata service will be available for the
NSX Edge nodes. Currently, two options are available: the *Public* and *Management*
networks.
If the *Management* network is selected, then a free IP address from the
management network range for nova-api-metadata is allocated automatically;
you do not need to specify your own IP address, netmask, or gateway.
If the *Public* network is selected, then you need to specify your own IP
address, netmask, and gateway. See the metadata-related settings below.
.. warning::
Do not change the metadata settings after the cluster is deployed.
To enable the Nova metadata service, configure the following settings:
#. Metadata allowed ports -- a comma-separated list of TCP ports allowed access
to the metadata proxy in addition to 80, 443, and 8775.
#. Metadata portgroup MoRef ID -- a portgroup MoRef ID for the metadata proxy
service.
#. Metadata proxy IP addresses -- comma-separated IP addresses used by the Nova
metadata proxy service.
#. Management network netmask -- the management network netmask for the
metadata proxy service.
#. Management network default gateway -- the management network gateway for
the metadata proxy service.
#. Floating IP ranges -- a dash-separated IP address allocation pool from the
external network, e.g. "192.168.30.1-192.168.30.200".
#. External network CIDR -- network in CIDR notation that includes floating IP ranges.
#. Gateway -- the default gateway for the external network; if not defined, the
first IP address of the network is used.
#. Internal network CIDR -- network in CIDR notation for use as internal.
#. DNS for internal network -- comma-separated IP addresses of the DNS servers for the
internal network.
If you tick the :guilabel:`Additional settings` checkbox, the following
options become available for configuration:
#. Instance name servers -- comma-separated IP addresses of the name servers
that are passed to the instance.
#. Task status check interval -- asynchronous task status check interval,
the default value is 2000 (milliseconds).
#. Maximum tunnels per vnic -- specify the maximum number of tunnels per vnic;
the possible range of values is 1-110 (20 is used if no other value is
provided).
#. API retries -- maximum number of API retries (10 by default).
#. Enable SpoofGuard -- the option allows you to control the behaviour of
the port-security feature that prevents traffic flow if the IP address
of the VM reported by VMware Tools does not match the source IP address
observed in outgoing VM traffic (consider the case when the VM was
compromised).
#. Tenant router types -- an ordered list of preferred tenant router types
(the default value is ``shared, distributed, exclusive``).
* shared -- multiple shared routers may share one edge VM.
* exclusive -- each router owns one edge VM.
* distributed -- same as exclusive, but the edge is created as a distributed
logical router. The VM traffic is routed via DLR kernel modules on each
ESXi host.
#. Exclusive router size -- the size of the edge for the exclusive router
(the value must be one of *compact*, *large*, *quadlarge*, or *xlarge*).
#. Edge user -- the user that will be created on edge VMs for remote login.
#. Edge password -- the password for edge VMs. The password must match
the following rules:
* at least 12 characters (255 characters maximum)
* at least 1 upper case letter
.. warning::
The plugin cannot verify that the password conforms to the security policy.
If you enter a password that does not match the policy, the Neutron server
will not be able to create routers and the deployment process will stop,
because NSX does not permit creating edge nodes with a password that does
not match the security policy.
#. DHCP lease time -- the DHCP lease time in seconds for VMs. The default value is
86400 (24 hours).
#. Coordinator URL -- the URL for the distributed locking coordinator.

OpenStack environment notes
===========================

Environment creation
--------------------
Before you start the actual deployment, verify that your vSphere
infrastructure (vCenter and NSXv) is configured and functions properly.
The Fuel NSXv plugin cannot deploy the vSphere infrastructure; it must be
up and running before the OpenStack deployment.
To use the NSXv plugin, create a new OpenStack environment using the Fuel web
UI by doing the following:
#. On the :guilabel:`Compute` configuration step, tick the :guilabel:`vCenter`
checkbox:
.. image:: /image/wizard-step1.png
:scale: 70 %
#. After the plugin installation, use :guilabel:`Neutron with
NSXv plugin` at the :guilabel:`Networking Setup` step:
.. image:: /image/wizard-step2.png
:scale: 70 %
#. Once the environment is created, add one or more controller nodes.
Pay attention to which interface you assign the *Public* network. The
OpenStack controllers must have connectivity with the NSX Manager host
through the *Public* network since the *Public* network is the default
route for packets.
During the deployment, the plugin creates a simple network topology for
the admin tenant. The plugin creates a provider network, which connects the
tenants with the transport (physical) network, one internal network, and
a router that is connected to both networks.

Installation
============
#. Download the plugin .rpm package from the `Fuel plugin catalog`_.
#. Upload the package to the Fuel master node.
#. Install the plugin with the ``fuel`` command-line tool:
.. code-block:: bash
[root@nailgun ~] fuel plugins --install nsxv-3.0-3.0.0-1.noarch.rpm
#. Verify that the plugin installation is successful:
.. code-block:: bash
id | name | version | package_version
---|------|---------|----------------
1 | nsxv | 3.0.0 | 4.0.0
After the installation, the plugin can be used on new OpenStack clusters;
you cannot enable the plugin on already deployed clusters.
Uninstallation
--------------
Before uninstalling the plugin, ensure that there are no environments left that
use the plugin; otherwise, the uninstallation is not possible.
To uninstall the plugin, run the following:
.. code-block:: bash
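# A likely removal command (illustrative; match the name and version
# reported by ``fuel plugins``):
[root@nailgun ~] fuel plugins --remove nsxv==3.0.0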

Known issues
============
Deployment may fail when a large number of NSX edge nodes is specified in the backup pool
--------------------------------------------------------------------------------------------
When specifying a large number of edge nodes in the
:guilabel:`NSX backup Edge pool` setting, the deployment process may fail,
because the Neutron NSX plugin tries to provision the specified number of
backup nodes while the Neutron server waits until this operation is finished.
The default timeout for the neutron-server start is about 7 minutes. If you
encounter such behaviour, wait until all the backup edge nodes are
provisioned on the vSphere side and rerun the deployment process.
Changing ``admin_state_up`` does not affect actual port state
-----------------------------------------------------------------
The NSX plugin does not change *admin_state_up* of a port. Even if the operator
executes the ``neutron port-update`` command, the port will remain in the
active state, but will be reported as ``admin_state_up: False`` by the
``neutron port-show`` command.
3 OSTF fails in configuration with Ceilometer
---------------------------------------------
The nsxv_ceilometer test is marked as *Passed*.
We do not have a TestVM image due to the disabled 'nova' availability zone,
so we want to see these tests with the TestVM-VMDK image only.
See `LP1592357 <https://bugs.launchpad.net/fuel-plugin-nsxv/+bug/1592357>`_.


Limitations
===========
vCenter cluster names must be unique within the data center
------------------------------------------------------------
vCenter inventory allows the user to form a hierarchy by organizing
vSphere entities into folders. Clusters, by default, are created on the
first level of the hierarchy; they can then be put into folders.
The plugin supports the clusters that are located on all levels of the
hierarchy, thus cluster names must be unique.
Incompatible roles are explicitly hidden
----------------------------------------
The following roles are disabled for an OpenStack environment with the plugin:
* Compute
* Ironic
* Cinder
Compute and Ironic are incompatible, because an NSX v6.x switch is available
exclusively for ESXi; it is not possible to pass the traffic inside a compute
node that runs Linux and KVM. Cinder is disabled due to the inability to pass
LVM-based volumes via iSCSI to ESXi hosts; instead, use the *Cinder proxy for
VMware datastore* role.
Public floating IP range is ignored
-----------------------------------
Fuel requires the floating IP range to be within the *Public* IP range.
This requirement does not make sense with the NSXv plugin, because edge nodes
provide connectivity for virtual machines, not controllers. Nevertheless,
the floating IP range for the *Public* network must be assigned. The plugin
provides its own field for the floating IP range.
.. image:: /image/floating-ip.png
:scale: 70 %
Note that the Neutron L2/L3 configuration on the :guilabel:`Settings` tab
does not have an effect in an OpenStack cluster that uses NSXv. It contains
the settings for GRE tunneling, which has no effect with NSXv.
Private network is not used
---------------------------
It does not matter on which network interface you assign the *Private* network
traffic, because it does not flow through controllers. Nevertheless, an IP range
for the *Private* network must be assigned.
OpenStack environment reset/deletion
------------------------------------
The Fuel NSXv plugin does not provide a cleanup mechanism when an OpenStack
environment is reset or deleted. All logical switches and the edge virtual
machines remain intact; it is up to the operator to delete them and free
the resources.
Ceph block storage is not supported
-----------------------------------
The ESXi hypervisor does not have native support for mounting Ceph.
Sahara support
--------------


Release notes
=============
Release notes for Fuel NSXv plugin 3.0.0:
* Plugin is compatible with Fuel 9.0.
* Plugin settings were moved to the Networks tab.
* Roles that are not applicable to the environment with the plugin are hidden.
* Nova's timeout of HTTP requests to Neutron was increased to 900 seconds.
Under a large number of requests, Neutron may be busy for a long period of time.
* The user can assign nova-api-metadata to listen on the OpenStack public or
management network.
* LBaaS v2 support is configured by default.
* The Troubleshooting section was added to the plugin guide.
* The plugin supports searching vCenter cluster names across a given data
center hierarchy.
* Added new parameters that were added to the Neutron NSX plugin during the
Mitaka release.
Release notes for Fuel NSXv plugin 2.0.0:
* Plugin is compatible with Fuel 8.0.
* Support for Neutron server Liberty release.
* Added new parameters that were added to the Neutron NSX plugin during the
Liberty release.
* Support of Fuel `component registry feature
<https://blueprints.launchpad.net/fuel/+spec/component-registry>`_.
The plugin is shown as a separate item at the network step of the cluster
creation wizard.
* The plugin no longer ships a customized python-nova package. All the
functionality needed for NSX support is available in the python-nova Liberty
package.
* The plugin installation process takes less time, because it does not need to
restart docker containers.
* The setting 'Cluster MoRef IDs for OpenStack VMs' was removed.
The plugin automatically fetches the cluster names that are present on the
VMware tab and queries vCenter to get the MoRef ID. When a new compute-vmware
node is added and vSphere clusters get assigned to it, the plugin updates the
Neutron configuration file and restarts the service.
* Enabled the Neutron load balancer functionality and configured the Horizon UI
panel for LBaaS.


.. _troubleshooting:

Troubleshooting
===============
Neutron NSX plugin issues
-------------------------
The Neutron NSX plugin does not have a separate log file; its messages
are logged by the Neutron server. The default log file on the OpenStack
controllers for the Neutron server is ``/var/log/neutron/server.log``.
Inability to resolve NSX Manager hostname
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you see the following message::
2016-02-19 ... ERROR neutron ServerNotFoundError: Unable to find the server at nsxmanager.mydom.org
2016-02-19 ... ERROR neutron
It means that the controller cannot resolve the NSX Manager hostname
(``nsxmanager.mydom.org`` in this example) that is specified in the
configuration file.
Check that the DNS server IP addresses that you specified in the
:guilabel:`Host OS DNS Servers` section of the Fuel web UI are correct
and reachable by all controllers; note that the default route for the
controllers is the *Public* network. Also, verify that the host name that
you entered is correct by trying to resolve it via the ``host`` or ``dig``
programs.
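For example, a quick check from a controller node, using the example hostname
from the log message above:

.. code-block:: bash

host nsxmanager.mydom.org
dig +short nsxmanager.mydom.org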
SSL/TLS certificate problems
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
2016-02-19 ... ERROR neutron SSLHandshakeError: [Errno 1]_ssl.c:510: error:
14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
This error indicates that you enabled the SSL/TLS certificate verification, but
the certificate verification failed during the connection to NSX Manager.
The possible causes are:
#. The NSX Manager certificate expired. Log in to the NSX Manager web GUI and
check the certificate validity dates.
#. Check if the certification authority (CA) certificate is still valid.
The CA certificate is specified by the ``ca_file`` directive in ``nsx.ini``.
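As a quick check, you can inspect both certificates with ``openssl``
(an illustrative sketch; adjust the CA file path and the NSX Manager address
to your environment):

.. code-block:: bash

# validity dates of the CA bundle referenced by ca_file in nsx.ini
openssl x509 -in /path/to/ca_file.pem -noout -dates
# certificate that NSX Manager actually presents
openssl s_client -connect nsxmanager.mydom.org:443 </dev/null 2>/dev/null | openssl x509 -noout -dates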
User access problems
~~~~~~~~~~~~~~~~~~~~
* User account does not have sufficient privileges to perform certain
operations.
Non-existent vCenter entity specified
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If some setting of vCenter does not exist, the plugin will report the following
message, where the setting that is not found in vCenter varies:
::
Plugin: Configured datacenter_moid not found
2016-02-19 ... ERROR neutron
Non-existent transport zone
~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the transport zone does not exist, the plugin will fail with the following message:
::
2016-02-19 ... ERROR neutron NsxPluginException: An unexpected error occurred in the NSX
Plugin: Configured vdn_scope_id not found
You can get the list of available transport zones via a GET request to the NSX
Manager API URL ``https://nsx-manager.yourdomain.org/api/2.0/vdn/scopes``.
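For example, with ``curl`` (illustrative; substitute your NSX Manager address
and credentials, and drop ``-k`` if its certificate is trusted):

.. code-block:: bash

curl -k -u admin:<password> https://nsx-manager.yourdomain.org/api/2.0/vdn/scopes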
Neutron client returns 504 Gateway timeout
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This may signal that your NSX Manager or vCenter server is overloaded and
cannot handle the incoming requests in a certain amount of time. A possible
solution to this problem is to increase the haproxy timeouts for the nova API
and neutron. Double the values of the following settings:
* timeout client
* timeout client-fin
* timeout server
* timeout server-fin
Edit the configuration files in ``/etc/haproxy/conf.d`` and restart
haproxy on all controllers.
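A sketch of how to locate the relevant settings (the exact file names and
baseline values under ``/etc/haproxy/conf.d`` may differ in your environment):

.. code-block:: bash

# find the current values in the nova-api and neutron listener files,
# then double them and restart haproxy on every controller
grep -rE "timeout (client|server)(-fin)?" /etc/haproxy/conf.d/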
NSX platform issues
-------------------
Transport network connectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before debugging VM connectivity problems when the VMs are spread across
ESXi cluster hosts, verify that the transport (underlay) network
functions properly.
You can get the list of vmknic adapters used for VXLAN tunnels with the
``esxcli`` command by providing the DVS name. Then use one as the output
interface for ping and try to reach another ESXi host.
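For example (the DVS name ``computeDVS`` matches the later example in this
section; the exact sub-command may vary between NSX versions):

.. code-block:: bash

~ # esxcli network vswitch dvs vmware vxlan vmknic list --vds-name computeDVS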
Sample output (truncated)::
----------- -------------- ----------- ----------- ------- ----------- -------------
vmk1 50331670 33 0 0 172.16.0.91 255.255.255.0
Provide the ``++netstack=vxlan`` option to operate via the VXLAN networking
stack.
::
~ # ping ++netstack=vxlan -d -s 1550 -I vmk1 172.29.46.12
If the host does not respond, try the following options:
* remove the ``-d`` (disable don't fragment bit) and ``-s`` (packet size)
options and try to ping. In this case, the ping will use 56-byte packets; if
a reply gets successfully delivered, consider revising the MTU on the network
switches.
* if the ping with smaller packets also fails, consider revising the uplink
interface configuration (e.g. VLAN ID).
Verify NSX controllers state
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
NSX controllers must form a cluster majority.
You can verify the NSX controllers cluster state in the vSphere Web Client
(``Network & Security -> Installation -> Management``).
All of the controllers must be in the normal status.
Verify ESXi hosts connectivity with NSX controllers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Check that each ESXi host has established a connection with the NSX
controllers:
::
tcp 0 0 172.16.0.252:51916 192.168.130.101:1234
ESTABLISHED 77203 newreno netcpa-worker
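The list above can be obtained with a command along these lines (the NSX
controllers listen on port 1234):

.. code-block:: bash

~ # esxcli network ip connection list | grep 1234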
Check that all connections are in the ESTABLISHED state. If a connection is not
established:
* Check that the ESXi host can reach the NSX controller.
* Check the firewall between the ESXi host and the NSX controller.
* Check that the netcp agent (the process responsible for communication between
ESXi and the NSX controller) is running: ``/etc/init.d/netcpad status``. If it
is not running, try starting it and check that it is running:
::
~ # /etc/init.d/netcpad status
netCP agent service is running
Verify that the Control Plane is Enabled and the connection is up::
~ # esxcli network vswitch dvs vmware vxlan network list --vds-name computeDVS
VXLAN ID Multicast IP Control Plane
vSphere/NSX infrastructure is not running after power outage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
vCenter and NSX management VMs must be started in a certain order.
Please see `VMware KB article
<https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2139067>`_.

Usage
=====
The easiest way to check that the plugin works as expected is to create a
network or router using the ``neutron`` command-line tool:
::
root@node-4:~# . openrc
root@node-4:~# neutron router-create r1
You can monitor the plugin actions in ``/var/log/neutron/server.log`` and see
how edges appear in the list of the ``Networking & Security -> NSX Edges``
pane in vSphere Web Client. If you see error messages, check the
:ref:`Troubleshooting <troubleshooting>` section.
VXLAN MTU considerations
------------------------
The VXLAN protocol is used for L2 logical switching across ESXi hosts. VXLAN
adds additional data to the packet. Consider increasing the MTU size on the
network equipment connected to ESXi hosts.
Consider the following calculation when setting the MTU size:
Outer IPv4 header == 20 bytes
Outer UDP header == 8 bytes
VXLAN header == 8 bytes
Inner Ethernet frame == 1518 (14 bytes header, 4 bytes 802.1q header, 1500 Payload)
Summarizing all of these, we get 1554 bytes. Consider increasing the MTU on the
network hardware up to 1600 bytes, which is the default MTU value when you
configure VXLAN on ESXi hosts at the *Host Preparation* step.
To configure jumbo frames, see the recommendations at:
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2093324
Instances usage notes
---------------------
Instances that you run in an OpenStack cluster with vCenter and NSXv must have
VMware Tools installed; otherwise, there will be no connectivity and no
security groups functionality.
Neutron usage notes
-------------------
The only way to create a distributed router is to use the Neutron CLI tool:
.. code-block:: bash
$ neutron router-create dvr --distributed True
The creation of an exclusive tenant router is not supported in the OpenStack
dashboard (Horizon). You can create an exclusive router using the Neutron CLI
tool:
.. code-block:: bash
$ neutron router-create DbTierRouter-exclusive --router_type exclusive
During the creation of an external network for tenants, you must specify
a physical network (the ``--provider:physical_network`` parameter) that
will be used to carry the VM traffic into the physical network segment.
For Neutron with the NSX plugin, this parameter must be set to the MoRef ID of
the portgroup which provides connectivity with the physical network to the
NSX edge nodes.
.. code-block:: bash
$ neutron net-create External --router:external --provider:physical_network network-222
Loadbalancer as a service support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Starting from version 2.0.0, the plugin enables the Neutron load balancing
functionality and exposes it in the OpenStack dashboard. By default, the
Neutron NSX plugin is configured with LBaaSv2 support.
.. note::
The load balancing functionality requires attachment of an **exclusive** or
**distributed** router to the subnet prior to the provisioning of a load
balancer.
Create an exclusive or distributed router and connect it to the subnet.
.. code-block:: bash
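# Illustrative commands -- the router name is a placeholder; the subnet
# name matches the examples below:
$ neutron router-create www-router --router_type exclusive
$ neutron router-interface-add www-router www-subnet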
Create servers and permit HTTP traffic:

.. code-block:: bash
$ neutron security-group-rule-create --protocol tcp --port-range-min 80 \
--port-range-max 80 default
Create a loadbalancer, specify a name and a subnet where you want to balance
the traffic.
.. code-block:: bash
$ neutron lbaas-loadbalancer-create --name lb-www www-subnet
Create a listener.
.. code-block:: bash
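# A likely command for this step (verify the flag names against your
# neutron client version):
$ neutron lbaas-listener-create --loadbalancer lb-www --protocol HTTP \
--protocol-port 80 --name www-listener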
Create a load balancer pool:

.. code-block:: bash
$ neutron lbaas-pool-create --lb-method ROUND_ROBIN --listener www-listener \
--protocol HTTP --name www-pool
Find out the IP addresses of your VMs and create members in the pool.
.. code-block:: bash
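# Illustrative -- repeat for each back-end VM, substituting its address:
$ neutron lbaas-member-create --subnet www-subnet --address <vm-ip> \
--protocol-port 80 www-pool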
Create a virtual IP address:

.. code-block:: bash
$ neutron lb-vip-create --name lb_vip --subnet-id <private-subnet-id> \
--protocol-port 80 --protocol HTTP http-pool
Allocate a floating IP and associate it with the VIP.
.. code-block:: bash
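# Illustrative -- the external network name and the IDs are
# environment-specific:
$ neutron floatingip-create <external-net>
$ neutron floatingip-associate <floatingip-id> <vip-port-id>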