Fix errors and style in the guide
Change-Id: I57f1c39dc0a0a5542c64228dd0779d5f45a179b7
parent 615c977151, commit a08160d729
Welcome to Fuel NSXv plugin's documentation!
============================================

The Fuel NSXv plugin allows you to deploy an OpenStack cluster which can use
a pre-existing vSphere infrastructure with the NSX network virtualization
platform.

The plugin installs the Neutron NSX core plugin and allows logical network
equipment (routers, networks) to be created as NSX entities.

The plugin supports VMware NSX 6.1.3, 6.1.4, 6.2.1.

Plugin versions:

* 3.x.x series is compatible with Fuel 9.0. Tests were performed on the
  plugin v3.0 with VMware NSX 6.2.0 and vCenter 5.5.

* 2.x.x series is compatible with Fuel 8.0. Tests were performed on the
  plugin v2.0 with VMware NSX 6.2.0 and vCenter 5.5.

* 1.x.x series is compatible with Fuel 7.0. Tests were performed on the
  plugin v1.2 with VMware NSX 6.1.4 and vCenter 5.5.

This documentation uses the terms "NSX" and "NSXv" interchangeably; both of
these terms refer to the `VMware NSX virtualized network platform
<https://www.vmware.com/products/nsx>`_.

The pre-built package of the plugin is available in the
`Fuel Plugin Catalog <https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins>`_.

Documentation contents
How to build the plugin from source
===================================

To build the plugin, you first need to install fuel-plugin-builder_ 4.1.0:

.. code-block:: bash

   $ pip install fuel-plugin-builder==4.1.0

Build the plugin:

.. code-block:: bash

   $ cd fuel-plugin-nsxv/

The librarian-puppet_ ruby package is required to be installed. It is used
to fetch the upstream fuel-library_ puppet modules that the plugin uses. It
can be installed via the *gem* package manager:

.. code-block:: bash

   $ gem install librarian-puppet

or, if you are using Ubuntu Linux, you can install it from the repository:

.. code-block:: bash

   $ apt-get install librarian-puppet

and build the plugin:

.. code-block:: bash

   $ fpb --build .

fuel-plugin-builder will produce an .rpm package of the plugin which you
need to upload to the Fuel master node:

.. code-block:: bash
Configuration
=============

Switch to the :guilabel:`Networks` tab of the Fuel web UI and click the
:guilabel:`Settings`/:guilabel:`Other` section. The plugin checkbox is
enabled by default. For clarity, the screenshot below shows only the
settings in focus:

.. image:: /image/nsxv-settings-filled.png
   :scale: 60 %

Several plugin input fields refer to MoRef ID (Managed Object Reference ID);
these object IDs can be obtained using the Managed Object Browser, which is
located on the vCenter host, e.g. https://<vcenter_host>/mob

Starting from Fuel 9.0, the settings on the Fuel web UI are not disabled,
and it is possible to run the deployment process with changed settings
against a working cluster. This change also impacts the plugin settings as
they can be changed and applied to Neutron. From the plugin perspective, it
is not possible to disable specific input fields; the settings below that
break Neutron operations are commented.

The plugin contains the following settings:

#. NSX Manager hostname (or IP) -- if you are going to use a hostname in
   this textbox, ensure that your OpenStack controller can resolve the
   hostname. Add the necessary DNS servers in the
   :guilabel:`Host OS DNS Servers` section. NSX Manager must be connected to
   the vCenter server specified on the VMware tab.

   The OpenStack controller must have L3 connectivity with NSX Manager
   through the Public network.

#. NSX Manager username.

   .. note::

      For the Neutron NSX plugin to operate properly, the account in use
      must have the Enterprise administrator role.

#. NSX Manager password.

#. Bypass NSX Manager certificate verification -- if enabled, the HTTPS
   connection is not verified. Otherwise, the two following options are
   available:

   * The setting "CA certificate file" appears below, making it possible to
     upload a CA certificate that issued the NSX Manager certificate.

   * With no CA certificate provided, the NSX Manager certificate is
     verified against the CA certificate bundle that comes by default within
     the OpenStack controller node operating system.

#. CA certificate file -- a file in PEM format that contains a bundle of CA
   certificates used by the plugin during the NSX Manager certificate
   verification. If no file is present, the HTTPS connection is not
   verified.

#. Datacenter MoRef ID -- an ID of the Datacenter where the NSX Edge nodes
   are deployed.

#. Resource pool MoRef ID -- a resource pool for the NSX Edge nodes
   deployment. Changing this setting on a deployed cluster affects only the
   new Edges.

#. Datastore MoRef ID -- a datastore for the NSX Edge nodes. Changing the
   datastore setting on a deployed cluster affects only the new Edges.

#. External portgroup MoRef ID -- a portgroup through which the NSX Edge
   nodes get connectivity with the physical network.

#. Transport zone MoRef ID -- a transport zone for VXLAN logical networks.

   .. note::

      This ID can be fetched using the NSX Manager API:
      https://<nsx_manager_host>/api/2.0/vdn/scopes
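The scopes response is XML; once it has been saved locally, the transport
zone MoRef IDs can be picked out with standard shell tools. The snippet
below is an illustrative sketch: the XML sample is made up and real
responses contain more fields, but the ``objectId`` values follow the same
``vdnscope-N`` pattern. Authenticating against a live NSX Manager (for
example with ``curl -k -u``) is left out of the sketch.

.. code-block:: bash

   # Illustrative sample of a saved scopes response (real output is richer).
   cat > /tmp/scopes.xml <<'EOF'
   <vdnScopes>
     <vdnScope><objectId>vdnscope-1</objectId><name>tz1</name></vdnScope>
     <vdnScope><objectId>vdnscope-7</objectId><name>tz2</name></vdnScope>
   </vdnScopes>
   EOF

   # Extract the MoRef IDs of all transport zones.
   grep -o 'vdnscope-[0-9]*' /tmp/scopes.xml | sort -u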
#. Distributed virtual switch MoRef ID -- an ID of the vSwitch connected to
   the Edge cluster.

#. NSX backup Edge pool -- the size of the NSX Edge nodes and the size of
   the Edge pool. The value must be in the format
   ``<edge_type>:[edge_size]:<min_edges>:<max_edges>``.

   **edge_type** can take the following values: *service* or *vdr* (service
   and distributed edge, respectively).

   NSX *vdr* nodes perform distributed routing and bridging.

   **edge_size** can take the following values: *compact*, *large* (the
   default value if omitted), *xlarge*, *quadlarge*.

   **min_edges** and **max_edges** define the minimum and maximum amount of
   NSX Edge nodes in the pool.

   The following table describes the NSX Edge types:

   ========= ===================
   Edge size Edge VM parameters
   ========= ===================
   xlarge    6 vCPU 8192 MB vRAM
   ========= ===================

   Example values:

   ``service:compact:1:2,vdr:compact:1:3``

   ``service:xlarge:2:6,service:large:4:10,vdr:large:2:4``
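Because a malformed pool value only surfaces late, during deployment, it can
help to sanity-check the string beforehand. The helper below is an
illustrative sketch, not part of the plugin; it assumes the colon-separated
format described above, with the size field allowed to be empty.

.. code-block:: bash

   # Validate an "NSX backup Edge pool" value of the form
   # <edge_type>:[edge_size]:<min_edges>:<max_edges>[,<more entries>...]
   validate_edge_pool() {
     local re='^(service|vdr):(compact|large|xlarge|quadlarge)?:[0-9]+:[0-9]+$'
     local entry
     IFS=',' read -ra entries <<< "$1"
     for entry in "${entries[@]}"; do
       [[ $entry =~ $re ]] || { echo "invalid entry: $entry"; return 1; }
     done
     echo "OK"
   }

   validate_edge_pool 'service:compact:1:2,vdr:compact:1:3'   # prints OK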
#. Enable HA for NSX Edges -- if you enable this option, the NSX Edges will
   be deployed in an active/standby pair on different ESXi hosts.
   Changing this setting on a deployed cluster affects only the new Edges.

#. Init metadata infrastructure -- if enabled, the instance attempts to
   initialize the metadata infrastructure to access the metadata proxy
   service; otherwise, the metadata proxy is not deployed.

#. Bypass metadata service certificate verification -- if enabled, the
   metadata service listens on the HTTP port. Otherwise, a self-signed
   certificate is generated, installed into the Edge nodes and
   nova-api-metadata, and HTTPS is enabled.

#. Which network will be used to access the nova-metadata -- select a
   network through which the nova-api-metadata service will be available for
   the NSX Edge nodes. Currently, two options are available: the *Public*
   and *Management* networks.

   If the *Management* network is selected, then a free IP address from the
   management network range for nova-api-metadata is allocated
   automatically; you do not need to specify your own IP address, netmask,
   or gateway.

   If the *Public* network is selected, then you need to specify your own IP
   address, netmask, and gateway. See the metadata related settings below.

   .. warning::

      Do not change the metadata settings after the cluster is deployed.

To enable the Nova metadata service, the following settings must be set:
#. Metadata allowed ports -- a comma-separated list of TCP ports allowed
   access to the metadata proxy in addition to 80, 443 and 8775.

#. Metadata portgroup MoRef ID -- a portgroup MoRef ID for the metadata
   proxy service.

#. Metadata proxy IP addresses -- comma-separated IP addresses used by the
   Nova metadata proxy service.

#. Management network netmask -- the management network netmask for the
   metadata proxy service.

#. Management network default gateway -- the management network gateway for
   the metadata proxy service.

#. Floating IP ranges -- a dash-separated IP addresses allocation pool from
   the external network, e.g. "192.168.30.1-192.168.30.200".
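As a quick illustration of the expected range format (this helper is not
part of the plugin), a dash-separated range can be split and loosely
validated in the shell:

.. code-block:: bash

   # Split "start-end" and check that both endpoints look like IPv4 addresses.
   range="192.168.30.1-192.168.30.200"
   start="${range%-*}"
   end="${range#*-}"
   ipv4_re='^([0-9]{1,3}\.){3}[0-9]{1,3}$'
   [[ $start =~ $ipv4_re && $end =~ $ipv4_re ]] && echo "range: $start .. $end"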
#. External network CIDR -- a network in CIDR notation that includes the
   floating IP ranges.

#. Gateway -- the default gateway for the external network; if not defined,
   the first IP address of the network is used.

#. Internal network CIDR -- a network in CIDR notation for use as internal.

#. DNS for internal network -- comma-separated IP addresses of the DNS
   servers for the internal network.

If you tick the :guilabel:`Additional settings` checkbox, the following
options become available for configuration:

#. Instance name servers -- comma-separated IP addresses of the name servers
   that are passed to the instance.

#. Task status check interval -- the asynchronous task status check
   interval; the default value is 2000 (milliseconds).

#. Maximum tunnels per vnic -- the maximum amount of tunnels per vnic; the
   possible range of values is 1-110 (20 is used if no other value is
   provided).

#. API retries -- the maximum number of API retries (10 by default).

#. Enable SpoofGuard -- this option controls the behaviour of the
   port-security feature that prevents traffic flow if the IP address of the
   VM reported by VMware Tools does not match the source IP address observed
   in outgoing VM traffic (consider the case when the VM was compromised).

#. Tenant router types -- an ordered list of preferred tenant router types
   (the default value is ``shared, distributed, exclusive``).

   * shared -- multiple shared routers may own one edge VM.
   * exclusive -- each router owns one edge VM.
   * distributed -- same as exclusive, but the edge is created as a
     distributed logical router. The VM traffic is routed via DLR kernel
     modules on each ESXi host.

#. Exclusive router size -- the size of the edge for the exclusive router
   (the value must be one of *compact*, *large*, *quadlarge* or *xlarge*).

#. Edge user -- the user that will be created on the edge VMs for remote
   login.

#. Edge password -- the password for the edge VMs. The password must match
   the following rules:

   * not less than 12 characters (max 255 chars)
   * at least 1 upper case letter

   .. warning::

      The plugin cannot verify that the password conforms to the security
      policy. If you enter a password that does not match the policy, the
      Neutron server will not be able to create routers and the deployment
      process will stop, because NSX does not permit creating edge nodes
      with a password that violates the security policy.

#. DHCP lease time -- the DHCP lease time in seconds for VMs. The default
   value is 86400 (24 hours).

#. Coordinator URL -- the URL for the distributed locking coordinator.
OpenStack environment notes
===========================

Environment creation
--------------------

Before you start the actual deployment, verify that your vSphere
infrastructure (vCenter and NSXv) is configured and functions properly. The
Fuel NSXv plugin cannot deploy the vSphere infrastructure; it must be up and
running before the OpenStack deployment.

To use the NSXv plugin, create a new OpenStack environment using the Fuel
web UI by doing the following:

#. On the :guilabel:`Compute` configuration step, tick the
   :guilabel:`vCenter` checkbox:

   .. image:: /image/wizard-step1.png
      :scale: 70 %

#. After the plugin installation, use :guilabel:`Neutron with NSXv plugin`
   at the :guilabel:`Networking Setup` step:

   .. image:: /image/wizard-step2.png
      :scale: 70 %

#. Once the environment is created, add one or more controller nodes.

   Pay attention to which interface you assign the *Public* network. The
   OpenStack controllers must have connectivity with the NSX Manager host
   through the *Public* network since the *Public* network is the default
   route for packets.

During the deployment, the plugin creates a simple network topology for the
admin tenant. The plugin creates a provider network which connects the
tenants with the transport (physical) network, one internal network, and a
router that is connected to both networks.
Installation
============

#. Download the plugin .rpm package from the `Fuel plugin catalog`_.

#. Upload the package to the Fuel master node.

#. Install the plugin with the ``fuel`` command-line tool:

   .. code-block:: bash

      [root@nailgun ~] fuel plugins --install nsxv-3.0-3.0.0-1.noarch.rpm

#. Verify that the plugin is installed successfully:

   .. code-block:: bash

      ---|------|---------|----------------
      1  | nsxv | 3.0.0   | 4.0.0

After the installation, the plugin can be used on new OpenStack clusters;
it is not possible to enable the plugin on already deployed clusters.

Uninstallation
--------------

Before uninstalling the plugin, ensure that no environments are left that
use the plugin; otherwise, it cannot be uninstalled.

To uninstall the plugin, run the following:

.. code-block:: bash
Known issues
============

Deployment process may fail when a big amount of NSX edge nodes is specified in backup pool
-------------------------------------------------------------------------------------------

When specifying a huge amount of edge nodes in the
:guilabel:`NSX backup Edge pool` setting, the deployment process may fail,
because the Neutron NSX plugin tries to provision the specified amount of
backup nodes while the Neutron server waits until this operation is
finished. The default timeout for the neutron-server start is about 7
minutes. If you encounter such behaviour, wait until all the backup edge
nodes are provisioned on the vSphere side and rerun the deployment process.

Changing ``admin_state_up`` does not affect actual port state
---------------------------------------------------------------

The NSX plugin does not change *admin_state_up* of a port. Even if the
operator executes the ``neutron port-update`` command, the port remains in
the active state, but is reported as ``admin_state_up: False`` by the
``neutron port-show`` command.

3 OSTF fails in configuration with Ceilometer
---------------------------------------------

The nsxv_ceilometer test is marked as *Passed*.
We do not have a TestVM image due to the disabled 'nova' AZ.
So we want to see these tests with the TestVM-VMDK image only.
See `LP1592357 <https://bugs.launchpad.net/fuel-plugin-nsxv/+bug/1592357>`_.

Limitations
===========

vCenter cluster names must be unique within the data center
-----------------------------------------------------------

vCenter inventory allows the user to form a hierarchy by organizing vSphere
entities into folders. By default, clusters are created on the first level of
the hierarchy, and can then be put into folders. The plugin supports clusters
located on any level of the hierarchy, thus cluster names must be unique.

Incompatible roles are explicitly hidden
----------------------------------------

The following roles are disabled for an OpenStack environment with the plugin:

* Compute
* Ironic
* Cinder

Compute and Ironic are incompatible because an NSX v6.x switch is available
exclusively for ESXi; it is not possible to pass the traffic inside a compute
node that runs Linux and KVM. Cinder is disabled due to the inability to pass
LVM-based volumes via iSCSI to ESXi hosts; instead, use the *Cinder proxy for
VMware datastore* role.

Public floating IP range is ignored
-----------------------------------

Fuel requires that the floating IP range be within the *Public* IP range. This
requirement does not make sense with the NSXv plugin, because edge nodes, not
controllers, provide connectivity for virtual machines. Nevertheless, a
floating IP range for the *Public* network must be assigned. The plugin
provides its own field for the floating IP range.

.. image:: /image/floating-ip.png
   :scale: 70 %

Note that the Neutron L2/L3 configuration on the :guilabel:`Settings` tab has
no effect in an OpenStack cluster that uses NSXv. These settings configure GRE
tunneling, which NSXv does not use.

Private network is not used
---------------------------

It does not matter on which network interface you assign the *Private* network
traffic, because it does not flow through controllers. Nevertheless, an IP
range for the *Private* network must be assigned.

OpenStack environment reset/deletion
------------------------------------

The Fuel NSXv plugin does not provide a cleanup mechanism when an OpenStack
environment is reset or deleted. All logical switches and edge virtual
machines remain intact; it is up to the operator to delete them and free the
resources.

Ceph block storage is not supported
-----------------------------------

The ESXi hypervisor does not have native support for mounting Ceph.

Sahara support
--------------


Release notes
=============

Release notes for Fuel NSXv plugin 3.0.0:

* Plugin is compatible with Fuel 9.0.
* Plugin settings were moved to the Networks tab.
* Roles that are not applicable to an environment with the plugin are hidden.
* Nova's timeout for HTTP requests to Neutron was increased to 900 seconds.
  With a big amount of requests, Neutron may be busy for a long period of
  time.
* User can assign nova-api-metadata to listen on the OpenStack public or
  management network.
* LBaaS v2 support is configured by default.
* A troubleshooting section was added to the plugin guide.
* The plugin supports searching vCenter cluster names across a given data
  center hierarchy.
* Added new parameters that were introduced in the Neutron NSX plugin during
  the Mitaka release.

Release notes for Fuel NSXv plugin 2.0.0:

* Plugin is compatible with Fuel 8.0.
* Support for the Neutron server Liberty release.
* Added new parameters that were introduced in the Neutron NSX plugin during
  the Liberty release.
* Support of the Fuel `component registry feature
  <https://blueprints.launchpad.net/fuel/+spec/component-registry>`_.
  The plugin is shown as a separate item at the network step of the cluster
  creation wizard.
* The plugin no longer ships a customized python-nova package. All the needed
  functionality for NSX support is available in the python-nova Liberty
  package.
* The plugin installation process takes less time, because it does not need
  to restart docker containers.
* The 'Cluster MoRef IDs for OpenStack VMs' setting was removed. The plugin
  automatically fetches the cluster names present on the VMware tab and
  queries vCenter to get the MoRef IDs. When a new compute-vmware node is
  added and vSphere clusters get assigned to it, the plugin updates the
  Neutron configuration file and restarts the Neutron server.
* Enabled the Neutron load balancer functionality and configured the Horizon
  UI panel for LBaaS.

Troubleshooting
===============

Neutron NSX plugin issues
-------------------------

The Neutron NSX plugin does not have a separate log file; its messages are
logged by the neutron server. The default log file on OpenStack controllers
for the neutron server is ``/var/log/neutron/server.log``.

Inability to resolve NSX Manager hostname
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you see the following message::

  2016-02-19 ... ERROR neutron ServerNotFoundError: Unable to find the server at nsxmanager.mydom.org
  2016-02-19 ... ERROR neutron

it means that the controller cannot resolve the NSX Manager hostname
(``nsxmanager.mydom.org`` in this example) that is specified in the
configuration file.
Check that the DNS server IP addresses that you specified in the
:guilabel:`Host OS DNS Servers` section of the Fuel web UI are correct and
reachable by all controllers; pay attention that the default route for the
controllers is the *Public* network. Also, verify that the host name that you
entered is correct by trying to resolve it via the ``host`` or ``dig``
programs.
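The resolution check can be scripted and run on every controller. Below is a
minimal sketch, assuming a Linux controller with ``getent`` available; the
``check_resolves`` helper is hypothetical, and ``nsxmanager.mydom.org`` is the
example hostname from the log above:

```shell
# Hypothetical helper: report whether a hostname resolves through the
# system resolver (the same path the neutron server uses).
check_resolves() {
  if getent hosts "$1" >/dev/null; then
    echo "resolvable"
  else
    echo "NOT resolvable"
  fi
}

# Substitute your real NSX Manager FQDN for the example name.
check_resolves nsxmanager.mydom.org
```

Running it on each controller quickly shows which of them has a broken DNS
configuration.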

SSL/TLS certificate problems
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

::

  2016-02-19 ... ERROR neutron SSLHandshakeError: [Errno 1]_ssl.c:510: error:
  14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

This error indicates that you enabled the SSL/TLS certificate verification,
but the certificate verification failed during the connection to NSX Manager.
The possible causes are:

#. The NSX Manager certificate expired. Log into the NSX Manager web GUI and
   check the certificate validity dates.
#. The certification authority (CA) certificate is no longer valid. The CA
   certificate is specified by the ``ca_file`` directive in ``nsx.ini``.

User access problems
~~~~~~~~~~~~~~~~~~~~

Possible solutions:

* The user account does not have sufficient privileges to perform certain
  operations.

Non-existent vCenter entity specified
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If some vCenter setting does not exist, the plugin will report the following
message, with the varying setting name that is not found in vCenter:

::

  2016-02-19 ... ERROR neutron NsxPluginException: An unexpected error occurred in the NSX
  Plugin: Configured datacenter_moid not found
  2016-02-19 ... ERROR neutron

Non-existent transport zone
~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the transport zone does not exist, the plugin will fail with the following
message:

::

  2016-02-19 ... ERROR neutron NsxPluginException: An unexpected error occurred in the NSX
  Plugin: Configured vdn_scope_id not found

You can get the list of available transport zones via a GET request to the NSX
Manager API URL ``https://nsx-manager.yourdomain.org/api/2.0/vdn/scopes``.

Neutron client returns 504 Gateway timeout
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

::

  <html><body><h1>504 Gateway Time-out</h1>
  The server didn't respond in time.
  </body></html>

This may signal that your NSX Manager or vCenter server is overloaded and
cannot handle the incoming requests in a certain amount of time. A possible
solution to this problem is to increase the haproxy timeouts for the nova API
and neutron. Double the values of the following settings:

* timeout client
* timeout client-fin
* timeout server
* timeout server-fin

Edit the configuration files in ``/etc/haproxy/conf.d`` and restart haproxy
on all controllers.

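Doubling the values can also be scripted. The sketch below operates on a
hypothetical fragment: the stanza name, the file path, and the millisecond
values are examples, not the actual Fuel defaults, and note that ``awk``
reprints the matched lines with single-space separators:

```shell
# Write a hypothetical haproxy fragment with timeouts in milliseconds.
cat > /tmp/haproxy-example.cfg <<'EOF'
listen neutron
  timeout client 50000
  timeout server 50000
EOF

# Double the third field (the timeout value) on matching lines.
awk '/timeout (client|server)/ { $3 = $3 * 2 } { print }' /tmp/haproxy-example.cfg
```

The pattern also matches the ``client-fin`` and ``server-fin`` variants, since
``client`` and ``server`` are substrings of those keywords; remember to
restart haproxy after editing the real files.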
NSX platform issues
-------------------

Transport network connectivity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Before debugging problems of VM connectivity when the VMs are spread across
ESXi cluster hosts, verify that the transport (underlay) network functions
properly.

You can get the list of vmknic adapters used for VXLAN tunnels with the
``esxcli`` command by providing the DVS name. Then use one of them as the
output interface for ping and try to reach another ESXi host.

::

  ----------- -------------- ----------- ----------- ------- ----------- -------------
  vmk1        50331670       33          0           0       172.16.0.91  255.255.255.0

Provide the ``++netstack=vxlan`` option to operate via the VXLAN networking
stack.

::

  ~ # ping ++netstack=vxlan -d -s 1550 -I vmk1 172.29.46.12

If the host does not respond, try the following options:

* Remove the ``-d`` (disable don't fragment bit) and ``-s`` (packet size)
  options and try to ping. In this case, the ping will use 56-byte packets;
  if a reply gets successfully delivered, consider revising the MTU on the
  network switches.
* If the ping with smaller packets also fails, check the uplink interface
  configuration (e.g. the VLAN ID).

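As a sanity check on the ``-s`` value, you can compute the largest ICMP
payload that fits a given transport MTU; the 1600-byte MTU here is the value
recommended for VXLAN, and the header sizes assume plain IPv4 ICMP without
options:

```shell
# Largest ping payload (-s) for a given MTU:
# MTU - IPv4 header (20 bytes) - ICMP header (8 bytes).
mtu=1600
echo $((mtu - 20 - 8))   # prints 1572
```

The ``-s 1550`` used above fits, because 1550 + 28 = 1578 bytes, which is
below a 1600-byte transport MTU.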
Verify NSX controllers state
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

NSX controllers must form a cluster majority.

You can verify the NSX controllers cluster state in the vSphere Web Client at
:guilabel:`Networking & Security` -> :guilabel:`Installation` ->
:guilabel:`Management`. All the controllers must be in the normal status.

Verify ESXi hosts connectivity with NSX controllers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Check that each ESXi host has established a connection with the NSX
controllers:

::

  tcp   0   0   172.16.0.252:51916   192.168.130.101:1234   ESTABLISHED   77203   newreno   netcpa-worker

Check that all connections are in the ESTABLISHED state. If a connection is
not established:

* Check that the ESXi host can reach the NSX controller.
* Check the firewall between the ESXi host and the NSX controller.
* Check that the netcpa agent (the process responsible for the communication
  between ESXi and the NSX controller) is running:
  ``/etc/init.d/netcpad status``. If it is not running, try starting it and
  check that it is running:

::

  ~ # /etc/init.d/netcpad status
  netCP agent service is running

Verify that the Control Plane is Enabled and the connection is up::

  ~ # esxcli network vswitch dvs vmware vxlan network list --vds-name computeDVS
  VXLAN ID  Multicast IP  Control Plane

vSphere/NSX infrastructure is not running after power outage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

vCenter and NSX management VMs must be started in a certain order. See the
`VMware KB article
<https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2139067>`_.

Usage
=====

The easiest way to check that the plugin works as expected is to create a
network or router using the ``neutron`` command-line tool:

::

  root@node-4:~# . openrc
  root@node-4:~# neutron router-create r1

You can monitor the plugin actions in ``/var/log/neutron/server.log`` and see
how the edges appear in the list on the :guilabel:`Networking & Security` ->
:guilabel:`NSX Edges` pane in the vSphere Web Client. If you see error
messages, check the :ref:`Troubleshooting <troubleshooting>` section.

VXLAN MTU considerations
------------------------

The VXLAN protocol is used for L2 logical switching across ESXi hosts. VXLAN
adds additional data to the packet; consider increasing the MTU size on the
network equipment connected to the ESXi hosts.

Consider the following calculation when setting the MTU size:

Outer IPv4 header == 20 bytes

Outer UDP header == 8 bytes

VXLAN header == 8 bytes

Inner Ethernet frame == 1518 bytes (14 bytes header, 4 bytes 802.1q header, 1500 bytes payload)

Summing all of these, we get 1554 bytes. Consider increasing the MTU on the
network hardware up to 1600 bytes, which is the default MTU value when you
configure VXLAN on the ESXi hosts at the *Host Preparation* step.

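The same arithmetic in shell form, using the sizes from the calculation above:

```shell
# VXLAN on-the-wire frame size: outer IPv4 + outer UDP + VXLAN header,
# plus the inner Ethernet frame with an 802.1q tag.
outer_ipv4=20; outer_udp=8; vxlan=8; inner_frame=1518
echo $((outer_ipv4 + outer_udp + vxlan + inner_frame))   # prints 1554
```
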
To configure jumbo frames, check the recommendations from:
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2093324

Instances usage notes
---------------------

Instances that you run in an OpenStack cluster with vCenter and NSXv must have
VMware Tools installed; otherwise, there will be no connectivity and no
security groups functionality.

Neutron usage notes
-------------------

The only way to create a distributed router is to use the Neutron CLI tool:

.. code-block:: bash

   $ neutron router-create dvr --distributed True

The creation of an exclusive tenant router is not supported in the OpenStack
dashboard (Horizon). You can create an exclusive router using the Neutron CLI
tool:

.. code-block:: bash

   $ neutron router-create DbTierRouter-exclusive --router_type exclusive

During the creation of an external network for tenants, you must specify a
physical network (the ``--provider:physical_network`` parameter) that will be
used to carry the VM traffic into the physical network segment. For Neutron
with the NSX plugin, this parameter must be set to the MoRef ID of the
portgroup which provides connectivity with the physical network to the NSX
edge nodes.

.. code-block:: bash

   $ neutron net-create External --router:external --provider:physical_network network-222

Loadbalancer as a service support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Starting from version 2.0.0, the plugin enables the Neutron load balancing
functionality and exposes it in the OpenStack dashboard. By default, the
Neutron NSX plugin is configured with LBaaSv2 support.

.. note::

   The load balancing functionality requires the attachment of an
   **exclusive** or **distributed** router to the subnet prior to the
   provisioning of a load balancer.

Create an exclusive or distributed router and connect it to the subnet.

Create servers and permit HTTP traffic.

.. code-block:: bash

   $ neutron security-group-rule-create --protocol tcp --port-range-min 80 \
     --port-range-max 80 default

Create a loadbalancer; specify a name and a subnet where you want to balance
the traffic.

.. code-block:: bash

   $ neutron lbaas-loadbalancer-create --name lb-www www-subnet

Create a listener.

Create a load balancer pool.

.. code-block:: bash

   $ neutron lbaas-pool-create --lb-method ROUND_ROBIN --listener www-listener \
     --protocol HTTP --name www-pool

Find out the IP addresses of your VMs and create members in the pool.

Create a virtual IP address.

.. code-block:: bash

   $ neutron lb-vip-create --name lb_vip --subnet-id <private-subnet-id> \
     --protocol-port 80 --protocol HTTP http-pool

Allocate a floating IP and associate it with the VIP.

.. code-block:: bash