Contrail 4.0-4.0.0-1 Plugin Guide review

Change-Id: I4f60cf1ffedfad7c804d4e45807a1a0e6db0c745
Signed-off-by: Illia Polliul <ipolliul@mirantis.com>
Alexander Adamov
2016-06-22 16:31:09 +03:00
committed by Illia Polliul
parent ff119c8374
commit a87085ef13
13 changed files with 378 additions and 245 deletions

@@ -30,8 +30,8 @@ master_doc = 'index'
project = u'Contrail plugin for Fuel'
copyright = u'2015, Mirantis Inc.'
version = '4.0'
release = '4.0-4.0.0'
version = '4.0-4.0.0-1'
release = '4.0-4.0.0-1'
pygments_style = 'sphinx'

@@ -1,9 +1,9 @@
Contrail TSN (experimental)
==========================
===========================
What is TSN
-----------
TSN Description
---------------
Contrail supports extending a cluster to include baremetal servers and other
virtual instances connected to a TOR switch supporting OVSDB protocol.
@@ -12,7 +12,7 @@ of any of the virtual networks configured in the contrail cluster, facilitating
communication between them and the virtual instances running in the cluster.
Contrail policy configurations can be used to control this communication.
The solution is achieved by using OVSDB protocol to configure the TOR switch and
The solution is achieved by using the OVSDB protocol to configure the TOR switch and
to import dynamically learnt addresses from it. VXLAN encapsulation will be used
in the data plane communication with the TOR switch.
@@ -30,23 +30,26 @@ The TSN can also act as the DHCP server for the bare metal servers or virtual in
leasing IP addresses to them, along with other DHCP options configured in the system.
The TSN also provides a DNS service for the bare metal servers.
For more details on this feature, refer to `Contrail Wiki <https://github.com/Juniper/contrail-controller/wiki/Baremetal-Support>`_
.. seealso::
`Contrail Wiki <https://github.com/Juniper/contrail-controller/wiki/Baremetal-Support>`_
Prerequisites
-------------
This guide assumes that you have `installed Fuel <https://docs.mirantis.com/openstack/fuel/fuel-8.0/pdf/Fuel-8.0-UserGuide.pdf>`_
This guide assumes that you have installed
`Fuel <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide.html>`_
and all the nodes of your future environment are discovered and functional.
To configure TSN in you environment you need to perform steps additional to :doc:`/install_guide`
To configure TSN in your environment, perform the steps below in addition to :doc:`/install_guide`
To configure TSN in your network you will need TOR switch.
To configure TSN in your network, you need a TOR switch.
.. raw:: latex
\clearpage
Configuring TSN
---------------
Configure TSN
-------------
#. Enable ToR Agents
@@ -72,4 +75,4 @@ Configuring TSN
tor_device_name: ovs2
tor_vendor_name: ovs
#. Verify working TSN by going to contrail WebUI
#. Verify that TSN is working by going to the Contrail web UI
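As a quick check (a sketch only; it assumes the ToR agent service names contain ``tor``),
you can also run ``contrail-status`` on the TSN node and confirm that the ToR agent
entries are reported as ``active``::

contrail-status | grep -i tor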

@@ -1,18 +1,21 @@
Basic Contrail Operations
=========================
Use Contrail
============
This document describes very basic operations with Contrail UI.
For detailed information on Contrail operations, please refer to official `Juniper documentation
<http://www.juniper.net/techpubs/en_US/contrail2.0/information-products/pathway-pages/getting-started.html#configuration>`_.
.. seealso::
`Juniper documentation
<http://www.juniper.net/techpubs/en_US/contrail2.0/information-products/pathway-pages/getting-started.html#configuration>`_.
.. raw:: latex
\pagebreak
Logging in
----------
Log into Contrail
-----------------
Log into Contrail UI using the OpenStack admin credentials.
To log into Contrail web UI, use the OpenStack admin credentials.
.. image:: images/contrail-login.png
@@ -20,28 +23,31 @@ Log into Contrail UI using the OpenStack admin credentials.
\pagebreak
Checking services status
------------------------
Verify services status
----------------------
Verify the status of Contrail Control Analytics and Config nodes along with vRouters in *Infrastructure* using *Dashboard*
tab of the left-hand *Monitor* menu.
Verify the status of the Contrail Control, Analytics, and Config nodes along
with vRouters in :guilabel:`Infrastructure` using the :guilabel:`Dashboard`
tab of the left-hand :guilabel:`Monitor` menu.
.. image:: images/contrail-services.png
Creating the virtual networks
-----------------------------
Create the virtual networks
---------------------------
To create the virtual networks:
* Open left-hand *Configure* menu and click *Networking* option. Enter *Networks* tab and use “+” sign at the right
side to create a new virtual network. Enter the network name and add an IP subnet. Gateway address will be added automatically.
* Open the left-hand :guilabel:`Configure` menu and click the :guilabel:`Networking` option. Enter the :guilabel:`Networks` tab
and use the ``+`` sign on the right side to create a new virtual network.
Enter the network name and add an IP subnet. The gateway address will be added automatically.
.. image:: images/contrail-create-net.png
* To create an external network, you need to add ``Shared`` and ``External`` flags to the created network using
the ``Advanced Options`` sections and provide a proper Routing mark in Route Targets section to let this network to be
announced to the public routing table.
The Routing mark is two numbers divided by a semicolon, e.g. 64512:10000.
* To create an external network, you need to add the ``Shared`` and ``External`` flags to the
created network using the ``Advanced Options`` section and provide a proper Routing mark
in the Route Targets section to let this network be announced to the public routing table.
The Routing mark is two numbers separated by a colon, for example ``64512:10000``.
.. image:: images/contrail-create-ext-net.png
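If you prefer the command line, a minimal sketch of the same operations with the ``neutron``
client is shown below. The network names and the subnet are examples only, and the route
target for the external network still has to be set in the Contrail web UI as described above::

neutron net-create net_demo
neutron subnet-create net_demo 192.168.111.0/24 --name subnet_demo
neutron net-create ext_demo --shared --router:external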

@@ -9,56 +9,66 @@ interface controller drivers for fast packet processing. The DPDK provides a pro
framework for Intel x86 processors and enables faster development of high-speed
data packet networking applications.
By default, contrail virtual router (vrouter) is running as a kernel module on Linux.
By default, the Contrail virtual router (vRouter) runs as a kernel module on Linux.
.. image:: images/vrouter_in_kernelspace.png
The vrouter module is able to fill a 10G link with TCP traffic from a virtual
The vRouter module can fill a 10G link with TCP traffic from a virtual
machine (VM) on one server to a VM on another server without making any
assumptions about hardware capabilities in the server NICs. Also, in order to
support interoperability and use a standards-based approach, vrouter does not
use new protocols/encapsulations. However, in network function virtualization
assumptions about hardware capabilities in the server NICs. Also, to
support interoperability and use a standards-based approach, vRouter does not
use new protocols and encapsulations. However, in network function virtualization
(NFV) scenarios, other performance metrics such as packets-per-second (pps) and
latency are as important as TCP bandwidth. With a kernel module, the pps number
is limited by various factors such as the number of VM exits, memory copies and
is limited by various factors such as the number of VM exits, memory copies, and
the overhead of processing interrupts.
In order to optimize performance for NFV use cases, vrouter can be integrated with the Intel DPDK (Data Plane Development Kit). To integrate with DPDK, the vrouter can now run in a user process instead of a kernel module.
To optimize performance for NFV use cases, vRouter can be integrated with the Intel DPDK
(Data Plane Development Kit). To integrate with DPDK, vRouter can now run as a user process
instead of a kernel module.
.. image:: images/vrouter_in_userspace.png
This process links with the DPDK libraries and communicates with the vrouter host agent, which runs as a separate process. The application inside the guest VM can be written to use the DPDK API or it can use the traditional socket API. However, for NFV applications such as vMX, which require high performance, it would be preferable to use the DPDK API inside the VM.
This process links with the DPDK libraries and communicates with the vRouter host agent,
which runs as a separate process. You can write an application inside the guest VM to
use the DPDK API or you can use the traditional socket API. However, for NFV applications
such as vMX, which require high performance, using the DPDK API inside the VM is preferable.
Prerequisites
-------------
- Installed `Fuel 8.0 <https://docs.mirantis.com/openstack/fuel/fuel-8.0/>`_
- Installed contrail plugin :doc:`/install_guide`
- Environment must be created with "KVM" for compute virtualization and "Neutron with tunneling segmentation" for networking
- Network card must support DPDK. List of compatible adapters can be found on `DPDK website <http://dpdk.org/doc/guides/nics/index.html>`_
- Installed Contrail plugin :doc:`/install_guide`
- Environment must support ``KVM`` for compute virtualization and ``Neutron with tunneling segmentation`` for networking
- Network card must support DPDK. List of compatible adapters can be found on the
`DPDK website <http://dpdk.org/doc/guides/nics/index.html>`_
Restrictions
------------
- Only compute hosts can be configured with DPDK role. "DPDK role" is just a mark that enables DPDK feature on certain compute. If you try to use it with other roles it wouldn't have any effect.
* Only compute hosts can be configured with the ``DPDK`` role. The ``DPDK`` role is just a mark that enables
  the DPDK feature on a certain compute node. If you use it with other roles, it
  has no effect.
- Contrail DPDK feature doesn't work with qemu virtualization as far as with nested KVM. This means that for current release DPDK-based vRouter works only on baremetal computes.
* The Contrail DPDK feature doesn't work with QEMU virtualization as well as with nested KVM.
  This means that for the current release, the DPDK-based vRouter works only on baremetal computes.
- Contrail DPDK vrouter permanently uses 1GB of hugepages, therefore, it is necessary to allocate enough amount of hugepages to run DPDK vrouter and VM's(with DPDK) respectively.
* The Contrail DPDK vRouter permanently uses 1GB of huge pages. Therefore, you need to allocate
  enough huge pages to run both the vRouter and the VMs with DPDK.
.. raw:: latex
\clearpage
Configuration
-------------
Configure DPDK
--------------
To enable DPDK you should proceed with following steps:
To configure DPDK, proceed with the following steps:
#. Enable contrail plugin in Fuel UI settings
#. Enable the Contrail plugin in Fuel web UI settings
.. image:: images/enable_contrail_plugin.png
@@ -66,32 +76,36 @@ To enable DPDK you should proceed with following steps:
\pagebreak
2. Enable DPDK on Fuel UI
2. Enable DPDK on Fuel web UI
.. image:: images/enable_contrail_dpdk.png
#. Choose the size and amount of huge pages to allocate. They will be used for
both vRouter process and VMs backing. 2MB sized huge pages can be added on-fly,
1GB sized require a reboot. Also, it is necessary to leave some amount of memory
1GB-sized pages require a reboot. Also, leave some amount of memory
for the operating system itself.
.. raw:: latex
\pagebreak
4. Add DPDK role on computes where you want to have DPDK-based vRouter.
**Computes that are not marked with DPDK role will use kernel-based vRouter.**
4. Add the ``DPDK`` role on the computes where you want to have a DPDK-based vRouter.
.. image:: images/add_dpdk_role.png
.. note::
Computes that are not marked with the ``DPDK`` role will use the kernel-based vRouter.
.. image:: images/add_dpdk_role.png
#. Deploy environment
.. warning::
Computes with DPDK-based vRouter require flavor with HugePages enabled.
**Instances with usual flavours can't be launched on DPDK-enabled hosts.**
Computes with the DPDK-based vRouter require a flavor with huge pages enabled.
Instances with regular flavors can't be launched on DPDK-enabled hosts.
If DPDK is enabled in plugin settings Fuel will create one flavor that will have hugepages support, named "m1.small.hpgs".
One can create custom flavor with following steps on controller node::
If DPDK is enabled in the plugin settings, Fuel will create one flavor with
huge pages support, named ``m1.small.hpgs``.
To create a custom flavor, follow the steps below on the controller node::
# . openrc
# nova flavor-create m2.small.hpgs auto 2000 20 2
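The custom flavor also needs huge pages memory backing; otherwise instances using it cannot
be scheduled on DPDK computes. A minimal sketch, assuming the standard ``hw:mem_page_size``
extra spec is honored::

# nova flavor-key m2.small.hpgs set hw:mem_page_size=large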
@@ -102,23 +116,29 @@ To enable DPDK you should proceed with following steps:
\clearpage
Verification
------------
Verify DPDK
-----------
After deploy finishes, you can verify your installation. First, proceed with basic checks.
To verify your installation, proceed with the basic checks below:
#. Check that Contrail services and DPDK vrouter are running on compute node::
#. Verify that Contrail services and DPDK vRouter are running on a compute node::
contrail-status
**System response**::
root@node-37:~# contrail-status
== Contrail vRouter ==
supervisor-vrouter: active
contrail-vrouter-agent active
contrail-vrouter-dpdk active
contrail-vrouter-nodemgr active
#. Check if DPDK vrouter catch interface::
#. Verify that the DPDK vRouter has bound the network interfaces::
/opt/contrail/bin/dpdk_nic_bind.py -s
**Example of system response**::
root@node-37:~# /opt/contrail/bin/dpdk_nic_bind.py -s
Network devices using DPDK-compatible driver
============================================
0000:06:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio
@@ -133,9 +153,12 @@ After deploy finishes, you can verify your installation. First, proceed with bas
=====================
<none>
#. Check if vrouter use hugepages::
#. Verify that the vRouter uses huge pages::
grep Huge /proc/meminfo
**Example of system response**::
root@node-37:~# grep Huge /proc/meminfo
AnonHugePages: 0 kB
HugePages_Total: 30000
HugePages_Free: 29488
@@ -144,15 +167,17 @@ After deploy finishes, you can verify your installation. First, proceed with bas
Hugepagesize: 2048 kB
#. Check if vrouter utilize CPU:
#. Verify that the vRouter utilizes the CPU:
.. image:: images/vrouter_utilize_cpu.png
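A command-line alternative to the screenshot (a sketch; the process name is taken from the
``contrail-status`` output above) is to check the CPU usage of the DPDK vRouter process directly::

ps aux | grep [c]ontrail-vrouter-dpdk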
#. Check if vrouter create interface after creation VM::
#. Verify that the vRouter creates an interface after a virtual machine is created::
vif --list
**Example of system response**::
root@node-41:~# vif --list
Vrouter Interface Table
Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
@@ -202,40 +227,67 @@ After deploy finishes, you can verify your installation. First, proceed with bas
TX port packets:15 errors:0
DPDK related options
--------------------
Change DPDK options
-------------------
In this chapter described DPDK related options that you can change from Fuel UI:
This section describes the DPDK-related options that you can change from the Fuel web UI:
- *"Enable DPDK feature for this environment."* - this option enable DPDK globally, remember that you anyway must use "DPDK" role to mark compute where you want to have DPDK
- *"Hugepage size"* - Choose the size of huge pages that will be used for a dpdk feature. Check if 1GB pages are supported on the target compute node. # grep pdpe1gb /proc/cpuinfo | uniq
- *"Hugepages amount (%)"* - set amount of memory allocated on each compute node for huge pages. It will use % of all memory available on compute. Remember that DPDK vrouter permanently use 1GB of huge pages and other applications running on compute node may not support huge pages, so this parameter should be used carefully.
- *"CPU pinning"* - this hexadecimal value describes how many and which exact processors will be used by dpdk-vrouter. CPU pinning is implemented using `taskset util <http://www.linuxcommand.org/man_pages/taskset1.html>`_
- *"Patch Nova"* - current release (7.0) of MOS nova doesn't have support for DPDK-based vRouter. In future, necessary patches will be included in MOS maintenance updates.
- *"Install Qemu and Libvirt from Contrail"* - DPDK-based vRouter needs huge pages memory-backing for guests. MOS 7.0 ships with qemu and libvirt that don't support it. This is needed only for DPDK feature and will be implemented only on nodes where we have "DPDK" role.
* :guilabel:`Enable DPDK feature for this environment` - this option enables DPDK globally.
  You still must use the ``DPDK`` role to mark the compute nodes where you want to have DPDK.
How to change huge pages settings after deployment
--------------------------------------------------
* :guilabel:`Hugepage size` - specifies the size of huge pages that will be used for
  the DPDK feature. Verify that 1GB pages are supported on the target compute node::
After deploy is finished, plugin settigs are locked in Fuel UI. Therefore, huge pages size/ammount cannot be changed
by plug-in, and should be changed manually on each compute node. Here are the nessesary steps:
grep pdpe1gb /proc/cpuinfo | uniq
**2MB-sized huge pages** are set with sysctl and can be added on fly with this command::
* :guilabel:`Hugepages amount (%)` - sets the amount of memory allocated on each compute node
  for huge pages, as a percentage of all memory available on the compute node. The DPDK vRouter
  permanently uses 1GB of huge pages, and other applications running on the compute node may
  not support huge pages. Therefore, use this parameter carefully.
# sysctl -w vm.nr_hugepages=<number of pages>
* :guilabel:`CPU pinning` - this hexadecimal value describes how many and which exact processors
  ``dpdk-vrouter`` will use (see the sketch after this list). CPU pinning is implemented using the
  `taskset util <http://www.linuxcommand.org/man_pages/taskset1.html>`_
Here number of huge pages can be calculated from ammount that you want to be set, for example 20GB = 20 * 1024 / 2 = 10240 pieces.
Then edit the /etc/sysctl.conf file to make these changes persistent over reboots.
* :guilabel:`Patch Nova` - in the MOS 8.0 release, Nova doesn't support the DPDK-based vRouter.
  In the future, MOS maintenance updates will include the necessary patches.
**1GB-sized huge pages** are set through the kernel parameter and require a reboot to take effect.
Kernel versions supplied with Ubuntu 14.04 don't support on fly allocation for 1GB-sized huge pages.
First, edit the /etc/default/grub file and set needed amount of huge pages.
Here is the example for GRUB_CMDLINE_LINUX in /etc/default/grub::
* :guilabel:`Install Qemu and Libvirt from Contrail` - the DPDK-based vRouter needs
  huge pages memory backing for guests.
  MOS 8.0 includes QEMU and libvirt versions that don't support huge pages memory backing.
  The QEMU and libvirt packages from Contrail are needed only on nodes with the ``DPDK`` role.
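The sketch below, referenced in the :guilabel:`CPU pinning` option above, shows one way to
inspect the resulting affinity. It assumes the DPDK vRouter process is named
``contrail-vrouter-dpdk``, as in the ``contrail-status`` output earlier; for example,
the mask ``0xf`` corresponds to cores 0-3::

# print the CPU affinity mask of the DPDK vRouter process
taskset -p $(pidof contrail-vrouter-dpdk)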
Change Huge Pages settings after deployment
-------------------------------------------
After the deployment is finished, plugin settings are locked in the Fuel web UI.
Therefore, the size and amount of huge pages cannot be changed
by the plugin.
You need to change the huge pages settings manually on each compute node.
To set 2MB-sized huge pages:
#. Calculate the number of huge pages based on the amount of memory you need.
   For example, 20GB = 20 * 1024 / 2 = 10240 pages.
#. Set 2MB-sized huge pages::
sysctl -w vm.nr_hugepages=<number of pages>
#. Edit the ``/etc/sysctl.conf`` file to make these changes persistent over reboots.
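A worked example for the 20GB case above (a sketch; adjust the page count to your own memory budget)::

sysctl -w vm.nr_hugepages=10240
echo "vm.nr_hugepages=10240" >> /etc/sysctl.conf
grep HugePages_Total /proc/meminfo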
Unlike 2MB-sized huge pages, you can set 1GB-sized huge pages
only through a kernel parameter, which requires a reboot to take effect.
Kernel versions supplied with Ubuntu 14.04 don't support on-the-fly allocation of 1GB-sized huge pages.
To set 1GB-sized huge pages:
#. Edit the ``/etc/default/grub`` file and set the needed amount of huge pages.
   For example, for ``GRUB_CMDLINE_LINUX`` in ``/etc/default/grub``::
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX hugepagesz=1024M hugepages=20
Then update the bootloader and reboot for these parameters to take effect::
#. Update the bootloader and reboot for these parameters to take effect::
# update-grub
# reboot
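After the reboot, you can confirm that the 1GB pages were allocated (a sketch; the sysfs path
assumes the standard 1GB huge page directory)::

# cat /proc/cmdline
# cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages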

@@ -6,15 +6,16 @@ Description
This guide describes how to run DPDK-based vRouter on virtual functions (VF).
DPDK on VF depends on :doc:`/enable_sriov` and :doc:`/dpdk` features.
This feature is aimed to share physical interface for DPDK and SR-IOV usage.
This feature shares a physical interface for DPDK and SR-IOV usage.
Prerequisites
-------------
- Installed `Fuel 8.0 <https://docs.mirantis.com/openstack/fuel/fuel-8.0/quickstart-guide.html#introduction>`_
- Installed `Fuel 8.0 <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide.html>`_
- Installed Fuel Contrail Plugin :doc:`/install_guide`
- Environment must be created with "KVM" for compute virtualization and "Contrail" for networking
- Network card must support DPDK. List of compatible adapters can be found on `DPDK website <http://dpdk.org/doc/guides/nics/index.html>`_
- Network card must support DPDK.
List of compatible adapters can be found on `DPDK website <http://dpdk.org/doc/guides/nics/index.html>`_
- Network card must support SR-IOV.
How to enable DPDK on VF
@@ -26,15 +27,17 @@ How to enable DPDK on VF
.. image:: images/enable_dpdk_on_vf.png
#. Assign Compute, DPDK and SRIOV role to the host where you want to enable DPDK on VF feature:
#. Assign the ``Compute``, ``DPDK``, and ``SRIOV`` roles to the host where you want to enable the DPDK on VF feature:
.. image:: images/compute_dpdk_sriov_roles.png
#. Deploy environment
If DPDK on VF is enabled in plug-in settings, it will be deployed on computes with 'DPDK' and 'SR-IOV' roles.
If DPDK on VF is enabled in plugin settings, it will be deployed on computes with ``DPDK``
and ``SR-IOV`` roles.
During deployment, the following configuration will be applied on compute nodes with the DPDK and SR-IOV roles:
#. Virtual functions will be allocated on the private interface.
#. First VF(dpdk-vf0) will be used for DPDK-based vRouter.
#. Rest of the VFs will be added to pci_passthrough_whitelist setting in nova.conf for SR-IOV usage.
#. First VF (dpdk-vf0) will be used for DPDK-based vRouter.
#. Rest of the VFs will be added to ``pci_passthrough_whitelist`` setting in ``nova.conf``
for SR-IOV usage.

@@ -1,38 +1,50 @@
SR-IOV
======
Enable SR-IOV
=============
Prerequisites
-------------
This guide assumes that you have `installed Fuel <https://docs.mirantis.com/openstack/fuel/fuel-8.0/>`_
and performed steps 5.3.1 - 5.3.9 from :doc:`/install_guide`.
To enable SR-IOV you need sriov capable network PCI card. Also, it is important to remember
that only compute hosts can be configured with sriov role.
To enable SR-IOV, you need an SR-IOV-capable network PCI card. Also, only compute hosts can be configured
with the ``SRIOV`` role.
Features
--------
#. You can have multiple VLANs inside one physical network
#. When using Passthrough (as in sriov scenario), there are no dhcp and metadata provided over openstack. You have to manage that manually or provide additional network port with usual openstack network.
#. When using ``Passthrough``, as in the SR-IOV scenario, OpenStack does not provide DHCP and metadata.
   You have to manage that manually or provide an additional network port with a regular OpenStack network.
What is SR-IOV
--------------
SR-IOV Description
------------------
Quoting `Mirantis blog post: <https://www.mirantis.com/blog/carrier-grade-mirantis-openstack-the-mirantis-nfv-initiative-part-1-single-root-io-virtualization-sr-iov/>`_
SR-IOV is a PCI Special Interest Group (PCI-SIG) specification for virtualizing network interfaces, representing each physical resource as a configurable entity (called a PF for Physical Function), and creating multiple virtual interfaces (VFs or Virtual Functions) with limited configurability on top of it, recruiting support for doing so from the system BIOS, and conventionally, also from the host OS or hypervisor. Among other benefits, SR-IOV makes it possible to run a very large number of network-traffic-handling VMs per compute without increasing the number of physical NICs/ports and provides means for pushing processing for this down into the hardware layer, off-loading the hypervisor and significantly improving both throughput and deterministic network performance.
SR-IOV is a PCI Special Interest Group (PCI-SIG) specification for virtualizing network interfaces,
representing each physical resource as a configurable entity (called a PF for Physical Function),
and creating multiple virtual interfaces (VFs or Virtual Functions) with limited configurability on top of it,
recruiting support for doing so from the system BIOS, and conventionally, also from the host OS or hypervisor.
Among other benefits, SR-IOV makes it possible to run a very large number of network-traffic-handling VMs per
compute without increasing the number of physical NICs/ports and provides means for pushing processing for
this down into the hardware layer, off-loading the hypervisor and significantly improving both throughput
and deterministic network performance.
How to check if network interface is sriov capable, and how many VFs are available/enabled
------------------------------------------------------------------------------------------
Issue following command on boostraped host::
Verify SR-IOV environment
-------------------------
To verify whether a network interface is SR-IOV capable and how many VFs are available,
run the following command on the bootstrapped host::
lspci -s <bus ID> -vvv
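A more complete sketch of the check is shown below; the PCI address ``06:00.0`` and the
interface name ``eth2`` are examples only::

lspci -s 06:00.0 -vvv | grep -i "Single Root I/O Virtualization"
cat /sys/class/net/eth2/device/sriov_totalvfs
cat /sys/class/net/eth2/device/sriov_numvfs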
How to enable SR-IOV in fuel
----------------------------
Enable SR-IOV in Fuel
---------------------
#. Enable SR-IOV in plugin settings and configure unique physnet name.
To enable SR-IOV in Fuel:
#. Enable SR-IOV in plugin settings and configure a unique physnet name.
.. image:: images/enable_sriov_settings.png
@@ -40,26 +52,31 @@ How to enable SR-IOV in fuel
\pagebreak
2. Assign SR-IOV role to compute hosts.
2. Assign the ``SRIOV`` role to compute hosts.
.. image:: images/enable_sriov_role_node.png
.. note::
SR-IOV will be enabled on all SR-IOV-capable interfaces that are not assigned
to the Fuel bridges (the networks in the Fuel web UI).
.. raw:: latex
\pagebreak
3. **SR-IOV will be enabled on all SR-IOV capable interfaces, not assigned
to Fuel bridges(networks in Fuel UI).**
List of interfaces can be modified manually after deployment.
3. You can modify the list of interfaces manually after deployment.
.. image:: images/sriov_interfaces.png
#. Perform deploy as in 5.3.10 :doc:`/install_guide`
#. Deploy as described in step 5.3.10 of :doc:`/install_guide`
How to create VM with sriov device
----------------------------------
Create a virtual machine with SR-IOV device
-------------------------------------------
#. Create VN with configured physical network and vlan id::
To create a virtual machine with an SR-IOV device:
#. Create a virtual network with the configured physical network and VLAN ID::
neutron net-create \
--provider:physical_network=<physical network from contrail settings tab> \
@@ -69,13 +86,13 @@ How to create VM with sriov device
neutron subnet-create <Network_name> <Subnet>
#. Create a Port::
#. Create a port::
neutron port-create \
--fixed-ip subnet_id=<subnet uuid>,ip_address=<IP address from above subnet> \
--name <name of port> <vn uuid> --binding:vnic_type direct
#. Boot VM with the port::
#. Boot the VM with the port::
nova boot \
--flavor m1.large --image <image name> \

@@ -1,7 +1,7 @@
.. _fuel-plugin-contrail:
Guide to the Contrail plugin version 4.0 for Fuel 8.0
=====================================================
Guide to the Contrail plugin version 4.0-4.0.0-1 for Fuel 8.0
=============================================================
.. toctree::
:maxdepth: 2

@@ -4,33 +4,42 @@ Installation Guide
Prerequisites
-------------
This guide assumes that you have `installed Fuel <https://docs.mirantis.com/openstack/fuel/fuel-8.0/pdf/Fuel-8.0-UserGuide.pdf>`_
This guide assumes that you have installed `Fuel <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide.html>`_
and all the nodes of your future environment are discovered and functional.
Installing Contrail Plugin
--------------------------
Install Contrail Plugin
-----------------------
#. Download Contrail plugin from the `Fuel Plugins Catalog <https://software.mirantis.com/download-mirantis-openstack-fuel-plug-ins/>`_.
To install the Contrail plugin:
#. Copy the rpm downloaded at previous step to the Fuel Master node and install the plugin
#. Download the Contrail plugin from the
`Fuel Plugins Catalog <https://software.mirantis.com/download-mirantis-openstack-fuel-plug-ins/>`_.
#. Copy the rpm package downloaded at the previous step to the Fuel Master node
::
scp contrail-4.0-4.0.0.noarch.rpm <Fuel Master node ip>:/tmp/
scp contrail-4.0-4.0.0-1.noarch.rpm <Fuel Master node ip>:/tmp/
#. Log into the Fuel Master node and install the plugin
::
ssh <the Fuel Master node ip>
fuel plugins --install contrail-4.0-4.0.0.noarch.rpm
fuel plugins --install contrail-4.0-4.0.0-1.noarch.rpm
You should get the following output
::
Plugin <plugin-name-version>.rpm was successfully installed
#. Copy Juniper contrail install package (obtained from Juniper by subscription, more information can be found on
`official Juniper Contrail web-site <http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/>`_ )
to the Fuel Master node and run the installation script to unpack the vendor package and populate plugin repository
#. Copy the Juniper Contrail installation package to the Fuel Master node and run the installation
script to unpack the vendor package and populate the plugin repository:
.. note::
You can obtain the Juniper Contrail installation package from Juniper by subscription.
More information can be found on the
`official Juniper Contrail web-site <http://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/>`__.
::
scp contrail-install-packages_3.0.2.0-51~14.04-liberty_all.deb \
@@ -42,10 +51,14 @@ Installing Contrail Plugin
\clearpage
Configuring Contrail Plugin
----------------------------
Configure Contrail Plugin
-------------------------
#. First, you need to `create environment (page 3) <https://docs.mirantis.com/openstack/fuel/fuel-8.0/pdf/Fuel-8.0-UserGuide.pdf>`_ in Fuel UI.
To configure the Contrail plugin, follow the steps below:
#. First, you need to
   `create an environment <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide.html>`_
   in the Fuel web UI.
.. image:: images/name_and_release.png
@@ -67,11 +80,12 @@ Configuring Contrail Plugin
.. image:: images/additional_services.png
#. Activate the plugin and fill configuration fields with correct values:
#. Enable the plugin and fill in the configuration fields with correct values:
* AS number for BGP Gateway nodes communication: (defaults to 64512).
* AS number for BGP Gateway nodes communication - defaults to 64512
* Gateway nodes IP addresses (provided as a comma-separated list) - peer addresses for BGP interaction with border routers.
* IP addresses of gateway nodes provided as a comma-separated list - peer addresses
for BGP interaction with border routers.
.. raw:: latex
@@ -83,12 +97,15 @@ Configuring Contrail Plugin
* At least 1 Compute
* At least 1 node with Contrail-Control, Contrail-Config,Contrail-DB roles selected ( 3 or other odd number of nodes
recommended for HA)
* At least 1 node with Contrail-Control, Contrail-Config, Contrail-DB roles selected
.. note::
Three or a greater odd number of nodes is recommended for HA.
* If you plan to use Heat with autoscaling, in addition to Ceilometer you need to add a node with the MongoDB role
These 3 roles are not necessary need to be on the same node.
These three roles do not necessarily need to be on the same node.
You can place them on different nodes if needed.
.. image:: images/contrail-roles.png
@@ -99,55 +116,65 @@ Configuring Contrail Plugin
.. image:: images/node-roles.png
#. The recommended size of partition for Contrail database is 256 GB or more.
#. The recommended size of partition for the Contrail database is 256 GB or more.
#. Configure the network settings. See details at `Mirantis OpenStack User Guide (page 16) <https://docs.mirantis.com/openstack/fuel/fuel-8.0/pdf/Fuel-8.0-UserGuide.pdf>`_.
#. Configure the network settings. See details at
`Fuel User Guide <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide.html>`_.
Open "Nodes" tab, select all the nodes and press **Configure interfaces** button
Open the :guilabel:`Nodes` tab, select all the nodes, and press the :guilabel:`Configure interfaces` button
.. image:: images/conf-interfaces.png
Set *Private* network to the separate network interface.
**DO NOT USE THIS PHYSICAL INTERFACE FOR ANY OTHER NETWORK.**
This interface will be used by contrail vRouter.
It is recommended to set the bigger MTU for Private interfaces (e.g. 9000) if the switching hardware supports
Set the Private network to a separate network interface.
.. warning::
Do not use this physical interface for any other network.
Contrail vRouter will use this interface.
Set a bigger MTU for the Private interfaces, for example 9000, if the switching hardware supports
Jumbo Frames.
This will enhance contrail network performance by avoiding packet fragmentation within Private network.
This will enhance Contrail network performance by avoiding packet fragmentation within
the Private network.
.. image:: images/public-net.png
.. warning::
**First usable addresses from the Private network will be used as VIP for Contrail controllers.**
For example, if your Private network CIDR is 192.168.200.0/24, then Contrail VIP will be **192.168.200.1**.
First usable addresses from the Private network will be used as VIP for Contrail controllers.
For example, if your Private network CIDR is ``192.168.200.0/24``, then Contrail VIP will be ``192.168.200.1``.
If you want to use other IP as VIP, you need to specify a range for this network.
.. raw:: latex
\pagebreak
9. Example network configuration
9. Example of network configuration
Hardware servers with two network interfaces are used as openstack nodes.
The interfaces configuration is following:
Use hardware servers with two network interfaces as OpenStack nodes.
The interfaces configuration is as follows:
* Management and Storage networks on the same interface with Admin net, using tagged VLANs
* Management and Storage networks are on the same interface with ``Admin`` network using tagged VLANs
* The second interface is dedicated for Public network as untagged
* The second interface is dedicated to Public network as untagged
* The forth interface is dedicated for Contrail operations as untagged (Private network)
* The fourth interface is dedicated to Contrail operations as untagged (Private network)
.. image:: images/conf-interfaces2.png
.. warning::
Be sure to launch `network verification check <https://docs.mirantis.com/openstack/fuel/fuel-7.0/user-guide.html#verify-networks-ug>`_
before starting deployment. **Incorrect network configuration will result in non-functioning environment.**
Be sure to launch
`network verification check <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide.html>`_
before starting the deployment. An incorrect network configuration will result in a
non-functioning environment.
#. Press **Deploy changes** to `deploy the environment (page 25) <https://docs.mirantis.com/openstack/fuel/fuel-8.0/pdf/Fuel-8.0-UserGuide.pdf>`_.
#. Press :guilabel:`Deploy changes` to `deploy the environment
   <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide.html>`_.
After installation is finished, `Contrail Web UI <http://www.juniper.net/techpubs/en_US/contrail2.0/topics/task/configuration
/monitor-dashboard-vnc.html>`_ can be accessed by the same IP address as Horizon, but using HTTPS protocol and port 8143.
For example, if you configured public network as described on screenshot below, then Contrail Web UI can be accessed by
**https://<Public-VIP>:8143**
After the installation is finished, the
`Contrail web UI <http://www.juniper.net/techpubs/en_US/contrail2.0/topics/task/configuration/monitor-dashboard-vnc.html>`_
can be accessed at the same IP address as the OpenStack Dashboard, but using the HTTPS protocol and port 8143.
For example, if you configured the public network as described on the screenshot above, you can
access the Contrail web UI at ``https://<Public-VIP>:8143``.
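A quick reachability check from any host with access to the Public network (a sketch; ``-k``
skips certificate validation, assuming the web UI ships with a self-signed certificate)::

curl -k -I https://<Public-VIP>:8143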

@@ -38,20 +38,20 @@ Key terms, acronyms and abbreviations
Overview
--------
Contrail plugin for Fuel provides the functionality to add Contrail SDN for Mirantis OpenStack as networking backend option
using Fuel Web UI in a user-friendly manner.
Juniper Networks Contrail is an open software defined networking solution that automates and orchestrates the creation of
highly scalable virtual networks.
Contrail plugin for Fuel adds Contrail SDN to Mirantis OpenStack as a networking back end option
using Fuel web UI in a user-friendly manner.
Juniper Networks Contrail is an open software defined networking solution that automates and
orchestrates the creation of highly scalable virtual networks.
Contrail features:
* Powerful API calls (REST or direct python class calls)
* Analytics engine: Traffic flow reports, statistics
* Analytics engine: traffic flow reports, statistics
* Network management at 2-4 OSI layers
* Service chaining architecture: you can transparently pass traffic through service instances, such as IDS, firewalls, DPI.
* Service chaining architecture: you can transparently pass traffic through service instances
such as IDS, firewalls, and DPI.
* Fine grained virtual network access policy control

@@ -1,19 +1,19 @@
Limitations
===========
* Removing Contrail-DB nodes from cluster is not supported by plugin, it can lead to data loss, so this must be
* The plugin does not support removing Contrail-DB nodes from a cluster. This can lead to data loss and must be
a manual procedure.
Adding new Contrail-DB nodes to the environment is supported.
The plugin supports adding new Contrail-DB nodes to the environment.
* If you use Contrail service chaining with service instances, you may need to add the *neutron* service user
to a current tenant after you have deployed the environment:
to a current project after you have deployed the environment:
* Open Horizon dashboard, navigate to Identity - Projects page.
* Open OpenStack Dashboard, navigate to the :guilabel:`Identity - Projects` page.
* Click *modify users* button on the right side of *admin* project.
* Click :guilabel:`modify users` button on the right side of the ``admin`` project.
* Add *neutron* user to project members with *_member_* role.
* Add the ``neutron`` user to project members with the ``_member_`` role (see the CLI sketch after this list).
* Changing the default OpenStack tenant name is not supported. Default tenant name should be 'admin'.
* Changing the default OpenStack project name is not supported. Default project name should be ``admin``.
* The password of OpenStack admin user should not contain following characters: **$**, **`**, **\\** and **!**
* The password of the OpenStack ``admin`` user should not contain the following characters: ``$``, `````, ``\\``, and ``!``
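For the service-chaining limitation above, a CLI alternative to the dashboard steps is shown below.
This is a sketch; it assumes the ``openstack`` client and an admin ``openrc`` file are available on a controller node::

. openrc
openstack role add --project admin --user neutron _member_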

@@ -1,5 +1,5 @@
New features in plugin version 4.0.0
====================================
New features in plugin version 4.0-4.0.0-1
==========================================
* Fuel 8.0 with OpenStack Liberty support

@@ -1,16 +1,21 @@
Using network templates
=======================
Use network templates
=====================
Starting from Fuel 7.0 it is possible to reduce the number of logical networks.
This is implemented with the function called network templates.
For detailed information on this feature, refer to
`Operations guide <https://docs.mirantis.com/openstack/fuel/fuel-7.0/operations.html#using-networking-templates>`_
Starting from Fuel 7.0, you can reduce the number of logical networks
using network templates.
This document provides sample configuration with network template.
It is designed to get customers up and running quickly.
The provided template utilizes 3 networks: Admin (PXE), Public and Private.
.. seealso::
#. First do steps 5.3.1 - 5.3.7 from :doc:`/install_guide`
`Operations guide <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/cli/cli_network_template.html>`_
This document provides a sample configuration with a network template
to get customers up and running quickly.
The provided template utilizes three networks: Admin (PXE), Public, and Private.
To use the network template:
#. Perform steps 5.3.1 - 5.3.7 from :doc:`/install_guide`
#. Configure interfaces as shown on figure:
@@ -20,23 +25,22 @@ The provided template utilizes 3 networks: Admin (PXE), Public and Private.
\pagebreak
3. Next, we need to set gateway for the private network.
Here is how to do it with Fuel CLI:
3. Set a gateway for the private network:
* Login with ssh to Fuel master node.
* List existing network-groups
#. Log in with SSH to the Fuel Master node.
#. List existing network-groups:
::
fuel network-group --env 1
* Remove and create again network-group *private* to set a gateway
#. Remove and re-create the ``private`` network group to set a gateway:
::
fuel network-group --delete --network 5
fuel network-group --create --name private \
--cidr 10.109.3.0/24 --gateway 10.109.3.1 --nodegroup 1
#. Set the ``render_addr_mask`` parameter to `internal` for this network by typing:
#. Set the ``render_addr_mask`` parameter to ``internal`` for this network by typing:
::
fuel network-group --set --network 6 \
@@ -53,4 +57,4 @@ The provided template utilizes 3 networks: Admin (PXE), Public and Private.
fuel --env 1 network-template --upload --dir /root/
#. Start deploy, pressing "Deploy changes" button.
#. Start the deployment by pressing the :guilabel:`Deploy changes` button.
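Before starting the deployment, you can optionally confirm that the template was attached to
the environment. This is a sketch; it assumes the ``--download`` action of ``fuel network-template``
is available in your Fuel version::

fuel --env 1 network-template --download --dir /tmp/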

@@ -1,14 +1,13 @@
Verification
============
After deploy finishes, you can verify your installation. First, proceed with basic checks described below.
Verify Contrail plugin
======================
Basic checks
------------
To verify your installation after deployment, perform the basic checks described below.
#. Check that Contrail services are running.
#. Verify that Contrail services are running.
Login to Contrail controller node and run contrail-status command. All services should be in "active" state:
::
#. Log in to the Contrail controller node and run the ``contrail-status`` command.
   All services should be in the ``active`` state:
::
# contrail-status
== Contrail Control ==
@@ -47,68 +46,90 @@ Basic checks
contrail-database-nodemgr active
kafka active
#. Check the list of peers and peering status
#. Verify the list of peers and peering status
Login to Contrail WebUI, go to Monitor -> Control nodes, choose any and select a “Peers” tab. You should see your compute nodes(vRouters) and external router in a list of peers. Status should be “Established”
#. Log in to the Contrail web UI
#. Go to :guilabel:`Monitor -> Control nodes`
#. Choose any node and select the :guilabel:`Peers` tab.
   You should see your compute nodes (vRouters) and the external router
   in the list of peers with the status ``Established``
.. image:: images/check_list_of_peers.png
#. Check that external router was provisioned correctly:
#. Verify that external router has been provisioned correctly:
Login to Contrail WebUI, go to Configure -> Infrastructure -> BGP routers. Verify the IP address of router
#. Log in to the Contrail web UI
#. Go to :guilabel:`Configure -> Infrastructure -> BGP routers`.
#. Verify the IP address of the router
.. image:: images/check_external_router.png
After that you can use health checks in Fuel UI, also called OSTF tests.
#. Use health checks in Fuel web UI, also called OSTF tests.
OSTF tests
----------
Run OSTF tests
--------------
- **Prerequisites for OSTF:**
Prerequisites for OSTF
++++++++++++++++++++++
#. OSTF tests require two pre-defined networks created - net04 and net04_ext. The networks are created by Fuel during deployment. This section includes instructions how to create them if they were accidentally deleted. Floating IP addresses from net04_ext should be accessible from Fuel master node.
#. 3 tests from “Functional tests” set require floating IP addresses. They should be configured on external router, routable from Fuel master node and populated in Contrail/Openstack environment.
#. HA tests require at least 3 Openstack controllers.
#. “Platform services functional tests.” require Ceilometer and Mongo.
#. OSTF tests require two pre-defined networks - ``net04`` and ``net04_ext``.
   The networks are created by Fuel during deployment. This section includes
   instructions on how to create them if they were accidentally deleted. Floating
   IP addresses from ``net04_ext`` should be accessible from the Fuel Master node.
#. Three tests from the ``Functional tests`` set require floating IP addresses.
   They should be configured on the external router, routable from the Fuel Master node, and
   populated in the OpenStack environment with Contrail.
#. HA tests require at least three OpenStack controllers.
#. ``Platform services functional tests`` require Ceilometer and MongoDB.
- **OSTF networks and floating IPs configuration:**
Configure OSTF networks and floating IPs
++++++++++++++++++++++++++++++++++++++++
To create networks go to Contrail WebUI -> Configure -> Networking -> Networks
To configure OSTF networks and floating IPs:
#. Create network “net04”
#. Go to Contrail web UI :guilabel:`Configure -> Networking -> Networks`
#. Create network ``net04``
.. image:: images/create_network_net04.png
#. Create network net04_ext.
#. Create network ``net04_ext``.
.. image:: images/create_network_net04_ext.png
.. image:: images/create_network_net04_ext.png
It should be marked as ``shared`` and ``external``
.. image:: images/create_network_net04_ext2.png
And have the same route target as configured in the external router
.. image:: images/create_network_net04_ext3.png
It should be marked as “shared” and “external”
#. Allocate floating IP addresses from ``net04_ext``
.. image:: images/create_network_net04_ext2.png
#. Go to the Contrail web UI :guilabel:`Configure -> Networking -> Manage Floating IPs`
.. image:: images/allocate_floating_ip_addresses.png
And have same route target as configured in external router
#. Start OSTF tests.
.. image:: images/create_network_net04_ext3.png
.. seealso::
#. Allocate floating IP addresses from net04_ext
Go to Contrail WebUI --> Configure -> Networking -> Manage Floating IPs
.. image:: images/allocate_floating_ip_addresses.png
After checking networks and floating IP addresses, start OSTF tests. For more details, refer to `Fuel user-guide <https://docs.mirantis.com/openstack/fuel/fuel-7.0/user-guide.html#post-deployment-check>`_.
`Fuel user-guide <http://docs.openstack.org/developer/fuel-docs/userdocs/fuel-user-guide/verify-environment/intro-health-checks.html>`_.
Troubleshooting
---------------
Start with checking output of contrail-status command. Then check the logs for corresponding serivice.
Contrail logs are located in /var/log/contrail/ directory, and log names match with contrail service name.
Cassandra logs are located in /var/log/cassandra/, zookeeper's is in /var/log/zookeeper/.
To troubleshoot:
#. Verify the output of the ``contrail-status`` command.
#. Verify the logs for the corresponding service:
* Contrail logs are located in the ``/var/log/contrail/`` directory, and log names match the Contrail service names.
* Cassandra logs are located in ``/var/log/cassandra/``
* ZooKeeper logs are in ``/var/log/zookeeper/``
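A minimal troubleshooting sketch; the log file name is only an example derived from the rule
above that log names match the service names::

contrail-status
tail -n 50 /var/log/contrail/contrail-control.log
ls /var/log/cassandra/ /var/log/zookeeper/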