tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml
Tim Rozet b30bdb6f8e Adds service for OVS and enables ODL DPDK deployments
In order to deploy OpenDaylight with DPDK we need to copy the DPDK
config for OVS done in the neutron-ovs-dpdk service template, without
enabling the OVS agent for compute nodes.  To do this correctly, we
should inherit an openvswitch service, which is a common place to set
OVS configuration and parameters.  Note: the vswitch::dpdk config will
be applied in the prenetwork setup with ovs_dpdk_config.yaml, so there
is no need to include it in the step config for the
neutron-ovs-dpdk-agent service or opendaylight-ovs-dpdk.
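
For illustration, a minimal sketch of how a service template such as
neutron-ovs-dpdk-agent.yaml can consume the common openvswitch template
by merging its role_data (resource and property names here are
illustrative, not the exact contents of this patch):

resources:
  Ovs:
    # Common openvswitch service template holding the shared OVS/DPDK settings.
    type: ./openvswitch.yaml
    properties:
      ServiceNetMap: {get_param: ServiceNetMap}
      DefaultPasswords: {get_param: DefaultPasswords}
      EndpointMap: {get_param: EndpointMap}

outputs:
  role_data:
    value:
      service_name: neutron_ovs_dpdk_agent
      config_settings:
        # Merge the shared OVS settings with the agent-specific ones.
        map_merge:
          - get_attr: [Ovs, role_data, config_settings]
          - neutron::agents::ml2::ovs::datapath_type: {get_param: NeutronDatapathType}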

Changes Include:
 - Creates a common openvswitch service template, which in the future
   will migrate to be its own service.
 - Renames and fixes OVS DPDK configuration heat parameters in the
   openvswitch template.
 - neutron-ovs-dpdk-agent now inherits the common openvswitch template.
 - Adds opendaylight-ovs-dpdk template which also inherits common ovs
   template.
 - Uses an OVS DPDK config script to allow configuring OVS DPDK in the
   prenetwork config (before os-net-config runs).  This has an issue
   where hieradata is not present yet, so we have to redefine the Heat
   parameters and pass them via bash.  In the future this should be
   corrected.
 - Adds an opendaylight-dpdk environment file used to deploy an ODL +
   DPDK deployment (a sketch of the pattern follows this list).
 - Updates neutron-ovs-dpdk environment file.
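
As a hedged illustration of that pattern (the exact service keys and
relative paths in the real opendaylight-dpdk environment file may
differ), the idea is to disable the plain OVS agent on compute nodes
and wire in the DPDK-enabled OVS service that inherits the common
openvswitch template:

resource_registry:
  # Do not run the plain OVS agent on compute nodes.
  OS::TripleO::Services::ComputeNeutronOvsAgent: OS::Heat::None
  # Hypothetical key/path: the ODL OVS DPDK service built on the common template.
  OS::TripleO::Services::ComputeNeutronOvsDpdk: ../puppet/services/opendaylight-ovs-dpdk.yaml

parameter_defaults:
  NeutronDatapathType: "netdev"
  NeutronVhostuserSocketDir: "/var/lib/vhost_sockets"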

Closes-Bug: 1656097
Partial-Bug: 1656096

Depends-On: I3227189691df85f265cf84bd4115d8d4c9f979f3

Change-Id: Ie80e38c2a9605d85cdf867a31b6888bfcae69e29
Signed-off-by: Tim Rozet <trozet@redhat.com>

# A Heat environment that can be used to deploy DPDK with OVS
# Deploying DPDK requires enabling hugepages for the overcloud nodes
resource_registry:
  OS::TripleO::Services::ComputeNeutronOvsAgent: ../puppet/services/neutron-ovs-dpdk-agent.yaml

parameter_defaults:
  NeutronDatapathType: "netdev"
  NeutronVhostuserSocketDir: "/var/lib/vhost_sockets"
  NovaSchedulerDefaultFilters: "RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NUMATopologyFilter"

  ## Deploying DPDK requires enabling hugepages for the overcloud compute nodes.
  ## It also requires enabling IOMMU when using the VFIO (vfio-pci) OvsDpdkDriverType.
  ## This can be done using ComputeKernelArgs as shown below.
  #ComputeParameters:
  #  ComputeKernelArgs: "intel_iommu=on default_hugepagesz=2MB hugepagesz=2MB hugepages=2048"

  ## Attempting to deploy DPDK without appropriate values for the below parameters may lead to unstable deployments
  ## due to CPU contention of DPDK PMD threads.
  ## It is highly recommended to enable isolcpus (via ComputeKernelArgs) on compute overcloud nodes and set the following parameters:
  #OvsDpdkSocketMemory: ""       # Sets the amount of hugepage memory to assign per NUMA node.
                                 # It is recommended to use the socket closest to the PCIe slot used for the
                                 # desired DPDK NIC.  The format is a comma-separated per-socket string such as:
                                 # "<socket 0 mem MB>,<socket 1 mem MB>", for example: "1024,0".
  #OvsDpdkDriverType: "vfio-pci" # Ensure the Overcloud NIC to be used for DPDK supports this UIO/PMD driver.
  #OvsPmdCoreList: ""            # List or range of CPU cores for PMD threads to be pinned to.  Note that NIC
                                 # location relative to the cores on a socket, the number of hyper-threaded
                                 # logical cores, and the desired number of PMD threads can all play a role in
                                 # configuring this setting.  These cores should be on the same socket where
                                 # OvsDpdkSocketMemory is assigned.  If using hyperthreading, specify both
                                 # logical cores that make up the physical core.  Also, specifying more than
                                 # one core will spawn multiple PMD threads, which may improve dataplane
                                 # performance.
  #NovaVcpuPinSet: ""            # Cores to pin Nova instances to.  For maximum performance, select cores
                                 # on the same NUMA node(s) selected for the previous settings.
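  ## For reference, a hedged example of how the commented parameters above might be
  ## filled in on a hypothetical dual-socket compute node whose DPDK NIC sits on
  ## socket 0.  The values below are placeholders that show the expected format,
  ## not tuning recommendations:
  #ComputeParameters:
  #  ComputeKernelArgs: "intel_iommu=on default_hugepagesz=2MB hugepagesz=2MB hugepages=2048 isolcpus=2-19"
  #OvsDpdkSocketMemory: "1024,0"  # hugepage memory assigned to socket 0 only
  #OvsDpdkDriverType: "vfio-pci"
  #OvsPmdCoreList: "2,3"          # PMD cores on socket 0, excluded from NovaVcpuPinSet
  #NovaVcpuPinSet: "4-19"         # instance vCPUs kept off the PMD cores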