tripleo-heat-templates/environments/neutron-ovs-dpdk.yaml
Jaganathan Palanisamy ebf312433f Memory channels parameter default value
This change adds a memory channels parameter with a default
value of "4", which can be updated based on the hardware spec.
The memory channels parameter is not derivable in Mistral derive
parameters using introspection memory bank data, since the format
of the memory slot name is not consistent across environments.

Change-Id: I753861c69e64d341e313e8878ac629136aa5852c
Closes-Bug: #1734814
2017-11-28 05:19:00 -05:00
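The default can be overridden in any environment file passed later on the deploy command line; a minimal sketch, where the value "8" is purely illustrative and should match the hardware's actual memory channel count:

parameter_defaults:
  OvsDpdkMemoryChannels: "8"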


# A Heat environment that can be used to deploy DPDK with OVS
# Deploying DPDK requires enabling hugepages for the overcloud nodes
resource_registry:
  OS::TripleO::Services::ComputeNeutronOvsDpdk: ../puppet/services/neutron-ovs-dpdk-agent.yaml

parameter_defaults:
  NeutronDatapathType: "netdev"
  NeutronVhostuserSocketDir: "/var/lib/vhost_sockets"
  NovaSchedulerDefaultFilters: "RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NUMATopologyFilter"
  OvsDpdkDriverType: "vfio-pci"
  OvsDpdkMemoryChannels: "4"

  #ComputeOvsDpdkParameters:
    ## Host configuration Parameters
    #TunedProfileName: "cpu-partitioning"
    #IsolCpusList: ""              # Logical CPUs list to be isolated from the host process (applied via cpu-partitioning tuned).
                                   # It is mandatory to provide isolated CPUs for tuned to achieve optimal performance.
                                   # Example: "3-8,12-15,18"
    #KernelArgs: ""                # Space separated kernel args to configure hugepage and IOMMU.
                                   # Deploying DPDK requires enabling hugepages for the overcloud compute nodes.
                                   # It also requires enabling IOMMU when using the VFIO (vfio-pci) OvsDpdkDriverType.
                                   # This should be done by configuring parameters via the host-config-and-reboot.yaml environment file.
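                                   # Illustrative value only; the hugepage size/count and IOMMU flags below are
                                   # assumptions that depend on the NIC driver and the available host memory:
                                   # Example: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"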
    ## Attempting to deploy DPDK without appropriate values for the below parameters may lead to unstable deployments
    ## due to CPU contention of DPDK PMD threads.
    ## It is highly recommended to enable isolcpus (via KernelArgs) on compute overcloud nodes and set the following parameters:
    #OvsDpdkSocketMemory: ""       # Sets the amount of hugepage memory to assign per NUMA node.
                                   # It is recommended to use the socket closest to the PCIe slot used for the
                                   # desired DPDK NIC. Format should be a comma separated per-socket string such as:
                                   # "<socket 0 mem MB>,<socket 1 mem MB>", for example: "1024,0".
    #OvsPmdCoreList: ""            # List or range of CPU cores for PMD threads to be pinned to. Note that the
                                   # NIC's location relative to cores on the socket, the number of hyper-threaded
                                   # logical cores, and the desired number of PMD threads can all play a role in
                                   # configuring this setting.
                                   # These cores should be on the same socket where OvsDpdkSocketMemory is assigned.
                                   # If using hyperthreading then specify both logical cores that would equal the
                                   # physical core. Also, specifying more than one core will trigger multiple PMD
                                   # threads to be spawned, which may improve dataplane performance.
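                                   # Illustrative example only (assumes physical cores 2 and 3, with hyper-thread
                                   # siblings 22 and 23, sit on the same NUMA node as the DPDK NIC): "2,22,3,23"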
    #NovaVcpuPinSet: ""            # Cores to pin Nova instances to. For maximum performance, select cores
                                   # on the same NUMA node(s) selected for previous settings.
    #NumDpdkInterfaceRxQueues: 1
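
  ## A minimal, fully populated sketch of the role-specific block above for a
  ## hypothetical hyper-threaded compute node. Every value is an assumption and
  ## must be derived from the actual CPU layout, NUMA topology and NIC placement:
  #ComputeOvsDpdkParameters:
  #  TunedProfileName: "cpu-partitioning"
  #  IsolCpusList: "2-19,22-39"
  #  KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on"
  #  OvsDpdkSocketMemory: "1024,1024"
  #  OvsPmdCoreList: "2,22,3,23"
  #  NovaVcpuPinSet: "4-19,24-39"
  #  NumDpdkInterfaceRxQueues: 1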