Add support for DPDK userspace networking

Add full support for DPDK, including configuration options to
change the number of cores and the amount of memory allocated per
NUMA node.  By default, the first core and 1024MB of RAM of each
NUMA node will be configured for DPDK use.

When DPDK is enabled, OVS bridges are configured as datapath type
'netdev' rather than type 'system' to allow use of userspace
DPDK packet processing; security groups are also disabled, as
iptables-based rules cannot be applied against userspace sockets.

DPDK device binding is undertaken using /etc/dpdk/interfaces and
the dpdk init script provided as part of the DPDK package; device
resolution is driven by the data-port configuration option,
using the <bridge>:<mac address> format - MAC addresses are used
to resolve the underlying PCI device names for binding with DPDK.

It's assumed that hugepage memory configuration is done either at
system boot via kernel command line options (set via MAAS)
or using the hugepages configuration option on the nova-compute
charm.

Change-Id: Ieb2ac522b07e495f1855e304d31eef59c316c0e4
James Page 2016-03-23 11:10:16 +00:00
parent fff134ee0e
commit acd617f4ca
17 changed files with 1160 additions and 32 deletions


@ -151,3 +151,30 @@ alternatively these can also be provided as part of a juju native bundle configu
NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.
NOTE: Existing deployments using os-data-network configuration options will continue to function; this option is preferred over any network space binding provided if set.
# DPDK fast packet processing support
For OpenStack Mitaka running on Ubuntu 16.04, it's possible to use experimental DPDK userspace network acceleration with Open vSwitch and OpenStack.
Currently, this charm supports use of DPDK enabled devices in bridges supporting connectivity to provider networks.
To use DPDK, you'll need supported network cards in your server infrastructure (see the [DPDK documentation][dpdk-nics]); DPDK must be enabled and configured during deployment of the charm, for example:
neutron-openvswitch:
enable-dpdk: True
data-port: "br-phynet1:a8:9d:21:cf:93:fc br-phynet2:a8:9d:21:cf:93:fd br-phynet3:a8:9d:21:cf:93:fe"
As devices are not typically named consistently across servers, multiple instances of each bridge -> MAC address mapping can be provided; the charm deals with resolving the set of bridge -> port mappings required for each individual unit of the charm.
DPDK requires the use of hugepages, which are not configured directly by the neutron-openvswitch charm; hugepage configuration can be done either by providing kernel boot command line options for individual servers via MAAS or by using the 'hugepages' configuration option of the nova-compute charm:
nova-compute:
hugepages: 50%
By default, the charm will configure Open vSwitch/DPDK to consume one processor core and 1GB of RAM from each NUMA node on the unit being deployed; this can be tuned using the dpdk-socket-memory and dpdk-socket-cores configuration options of the charm. The kernel userspace device driver can be configured using the dpdk-driver option. See config.yaml for more details.
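The following minimal Python sketch (illustrative only, not charm code; the two-node NUMA layout and helper names are assumed) shows how dpdk-socket-cores and dpdk-socket-memory translate into the DPDK core mask and socket memory string that end up on the ovs-vswitchd command line:
# Minimal sketch, assuming a two-node NUMA layout; mirrors the behaviour
# described above without using any charm code.
def cpu_mask(numa_cores, socket_cores=1):
    # Build a hex mask from the first `socket_cores` cores of each NUMA node.
    mask = 0
    for cores in numa_cores.values():
        for core in cores[:socket_cores]:
            mask |= 1 << core
    return format(mask, '#04x')

def socket_memory(numa_cores, mb_per_node=1024):
    # One hugepage allocation entry per NUMA node, comma separated.
    return ','.join(str(mb_per_node) for _ in numa_cores)

numa = {'0': [0, 1, 2, 3], '1': [4, 5, 6, 7]}  # assumed layout
print(cpu_mask(numa))       # 0x11 -> core 0 and core 4
print(socket_memory(numa))  # 1024,1024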
**NOTE:** Changing dpdk-socket-* configuration options will trigger a restart of Open vSwitch, which currently causes connectivity to running instances to be lost - connectivity can only be restored with a stop/start of each instance.
**NOTE:** Enabling DPDK support automatically disables security groups for instances.
[dpdk-nics]: http://dpdk.org/doc/nics


@ -120,3 +120,34 @@ options:
.
Only supported in OpenStack Liberty or newer, which has the required minimum version
of Open vSwitch.
enable-dpdk:
type: boolean
default: false
description: |
Enable DPDK fast userspace networking; this requires use of DPDK supported network
interface drivers and must be used in conjunction with the data-port configuration
option to configure each bridge with an appropriate DPDK enabled network device.
dpdk-socket-memory:
type: int
default: 1024
description: |
Amount of hugepage memory in MB to allocate per NUMA socket in deployed systems.
.
Only used when DPDK is enabled.
dpdk-socket-cores:
type: int
default: 1
description: |
Number of cores to allocate to DPDK per NUMA socket in deployed systems.
.
Only used when DPDK is enabled.
dpdk-driver:
type: string
default: uio_pci_generic
description: |
Kernel userspace device driver to use for DPDK devices, valid values include:
.
vfio-pci
uio_pci_generic
.
Only used when DPDK is enabled.


@ -25,10 +25,14 @@ from charmhelpers.core.host import (
)
def add_bridge(name):
def add_bridge(name, datapath_type=None):
''' Add the named bridge to openvswitch '''
log('Creating bridge {}'.format(name))
subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-br", name])
cmd = ["ovs-vsctl", "--", "--may-exist", "add-br", name]
if datapath_type is not None:
cmd += ['--', 'set', 'bridge', name,
'datapath_type={}'.format(datapath_type)]
subprocess.check_call(cmd)
def del_bridge(name):
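As a usage note, the following sketch (the bridge name and datapath type are illustrative, and the snippet only prints the command rather than calling ovs-vsctl) shows the command add_bridge now builds when a datapath_type is supplied:
# Illustrative only: build (but do not run) the ovs-vsctl command that
# add_bridge produces for a bridge with an explicit datapath_type.
name, datapath_type = 'br-int', 'netdev'
cmd = ["ovs-vsctl", "--", "--may-exist", "add-br", name]
if datapath_type is not None:
    cmd += ['--', 'set', 'bridge', name,
            'datapath_type={}'.format(datapath_type)]
print(' '.join(cmd))
# ovs-vsctl -- --may-exist add-br br-int -- set bridge br-int datapath_type=netdev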


@ -20,7 +20,7 @@ import os
import re
import time
from base64 import b64decode
from subprocess import check_call
from subprocess import check_call, CalledProcessError
import six
import yaml
@ -45,6 +45,7 @@ from charmhelpers.core.hookenv import (
INFO,
WARNING,
ERROR,
status_set,
)
from charmhelpers.core.sysctl import create as sysctl_create
@ -1479,3 +1480,104 @@ class NetworkServiceContext(OSContextGenerator):
if self.context_complete(ctxt):
return ctxt
return {}
class InternalEndpointContext(OSContextGenerator):
"""Internal endpoint context.
This context provides the endpoint type used for communication between
services, e.g. between Nova and Cinder internally. OpenStack uses public
endpoints by default, so this allows admins to optionally use internal
endpoints.
"""
def __call__(self):
return {'use_internal_endpoints': config('use-internal-endpoints')}
class AppArmorContext(OSContextGenerator):
"""Base class for apparmor contexts."""
def __init__(self):
self._ctxt = None
self.aa_profile = None
self.aa_utils_packages = ['apparmor-utils']
@property
def ctxt(self):
if self._ctxt is not None:
return self._ctxt
self._ctxt = self._determine_ctxt()
return self._ctxt
def _determine_ctxt(self):
"""
Validate that the aa-profile-mode setting is disable, enforce, or complain.
:return ctxt: Dictionary of the apparmor profile or None
"""
if config('aa-profile-mode') in ['disable', 'enforce', 'complain']:
ctxt = {'aa-profile-mode': config('aa-profile-mode')}
else:
ctxt = None
return ctxt
def __call__(self):
return self.ctxt
def install_aa_utils(self):
"""
Install packages required for apparmor configuration.
"""
log("Installing apparmor utils.")
ensure_packages(self.aa_utils_packages)
def manually_disable_aa_profile(self):
"""
Manually disable an apparmor profile.
If aa-profile-mode is set to disabled (the default) this is required, as the
template has been written but apparmor is not yet aware of the profile
and aa-disable aa-profile fails. Without this, the profile would kick
into enforce mode on the next service restart.
"""
profile_path = '/etc/apparmor.d'
disable_path = '/etc/apparmor.d/disable'
if not os.path.lexists(os.path.join(disable_path, self.aa_profile)):
os.symlink(os.path.join(profile_path, self.aa_profile),
os.path.join(disable_path, self.aa_profile))
def setup_aa_profile(self):
"""
Setup an apparmor profile.
The ctxt dictionary will contain the apparmor profile mode and
the apparmor profile name.
Makes calls out to aa-disable, aa-complain, or aa-enforce to setup
the apparmor profile.
"""
self()
if not self.ctxt:
log("Not enabling apparmor Profile")
return
self.install_aa_utils()
cmd = ['aa-{}'.format(self.ctxt['aa-profile-mode'])]
cmd.append(self.ctxt['aa-profile'])
log("Setting up the apparmor profile for {} in {} mode."
"".format(self.ctxt['aa-profile'], self.ctxt['aa-profile-mode']))
try:
check_call(cmd)
except CalledProcessError as e:
# If aa-profile-mode is set to disabled (default) manual
# disabling is required as the template has been written but
# apparmor is not yet aware of the profile and aa-disable aa-profile
# fails. If aa-disable learns to read profile files first this can
# be removed.
if self.ctxt['aa-profile-mode'] == 'disable':
log("Manually disabling the apparmor profile for {}."
"".format(self.ctxt['aa-profile']))
self.manually_disable_aa_profile()
return
status_set('blocked', "Apparmor profile {} failed to be set to {}."
"".format(self.ctxt['aa-profile'],
self.ctxt['aa-profile-mode']))
raise e


@ -156,6 +156,7 @@ PACKAGE_CODENAMES = {
]),
'keystone': OrderedDict([
('8.0', 'liberty'),
('8.1', 'liberty'),
('9.0', 'mitaka'),
]),
'horizon-common': OrderedDict([


@ -128,6 +128,13 @@ def service(action, service_name):
return subprocess.call(cmd) == 0
def systemv_services_running():
output = subprocess.check_output(
['service', '--status-all'],
stderr=subprocess.STDOUT).decode('UTF-8')
return [row.split()[-1] for row in output.split('\n') if '[ + ]' in row]
def service_running(service_name):
"""Determine whether a system service is running"""
if init_is_systemd():
@ -140,11 +147,15 @@ def service_running(service_name):
except subprocess.CalledProcessError:
return False
else:
# This works for upstart scripts where the 'service' command
# returns a consistent string to represent running 'start/running'
if ("start/running" in output or "is running" in output or
"up and running" in output):
return True
else:
return False
# Check System V scripts init script return codes
if service_name in systemv_services_running():
return True
return False
def service_available(service_name):


@ -1,5 +1,7 @@
import glob
import os
import uuid
from pci import PCINetDevices
from charmhelpers.core.hookenv import (
config,
relation_get,
@ -15,7 +17,9 @@ from charmhelpers.contrib.network.ip import get_address_in_network
from charmhelpers.contrib.openstack.context import (
OSContextGenerator,
NeutronAPIContext,
parse_data_port_mappings
)
from charmhelpers.core.unitdata import kv
class OVSPluginContext(context.NeutronContext):
@ -73,6 +77,7 @@ class OVSPluginContext(context.NeutronContext):
ovs_ctxt['verbose'] = conf['verbose']
ovs_ctxt['debug'] = conf['debug']
ovs_ctxt['prevent_arp_spoofing'] = conf['prevent-arp-spoofing']
ovs_ctxt['enable_dpdk'] = conf['enable-dpdk']
net_dev_mtu = neutron_api_settings.get('network_device_mtu')
if net_dev_mtu:
@ -108,6 +113,115 @@ class L3AgentContext(OSContextGenerator):
return ctxt
def resolve_dpdk_ports():
'''
Resolve local PCI devices from configured mac addresses
using the data-port configuration option
@return: dict mapping PCI device address to bridge name
'''
ports = config('data-port')
devices = PCINetDevices()
resolved_devices = {}
db = kv()
if ports:
# NOTE: ordered dict of format {mac: bridge}
portmap = parse_data_port_mappings(ports)
for mac, bridge in portmap.iteritems():
pcidev = devices.get_device_from_mac(mac)
if pcidev:
# NOTE: store the mac->pci allocation because, post binding
# to DPDK, the device disappears from PCINetDevices.
db.set(mac, pcidev.pci_address)
db.flush()
pci_address = db.get(mac)
if pci_address:
resolved_devices[pci_address] = bridge
return resolved_devices
def parse_cpu_list(cpulist):
'''
Parses a linux cpulist for a numa node
@return list of cores
'''
cores = []
ranges = cpulist.split(',')
for cpu_range in ranges:
cpu_min_max = cpu_range.split('-')
cores += range(int(cpu_min_max[0]),
int(cpu_min_max[1]) + 1)
return cores
def numa_node_cores():
'''Dict of numa node -> cpu core mapping'''
nodes = {}
node_regex = '/sys/devices/system/node/node*'
for node in glob.glob(node_regex):
index = node.lstrip('/sys/devices/system/node/node')
with open(os.path.join(node, 'cpulist')) as cpulist:
nodes[index] = parse_cpu_list(cpulist.read().strip())
return nodes
class DPDKDeviceContext(OSContextGenerator):
def __call__(self):
return {'devices': resolve_dpdk_ports(),
'driver': config('dpdk-driver')}
class OVSDPDKDeviceContext(OSContextGenerator):
def cpu_mask(self):
'''
Hex formatted CPU mask, using the first
config:dpdk-socket-cores cores of each NUMA node
in the unit.
'''
num_cores = config('dpdk-socket-cores')
mask = 0
for cores in numa_node_cores().itervalues():
for core in cores[:num_cores]:
mask = mask | 1 << core
return format(mask, '#04x')
def socket_memory(self):
'''
Formatted list of socket memory configuration for dpdk using
config:dpdk-socket-memory per NUMA node.
'''
sm_size = config('dpdk-socket-memory')
node_regex = '/sys/devices/system/node/node*'
mem_list = [str(sm_size) for _ in glob.glob(node_regex)]
if mem_list:
return ','.join(mem_list)
else:
return str(sm_size)
def device_whitelist(self):
'''Formatted list of devices to whitelist for dpdk'''
_flag = '-w {device}'
whitelist = []
for device in resolve_dpdk_ports():
whitelist.append(_flag.format(device=device))
return ' '.join(whitelist)
def __call__(self):
ctxt = {}
whitelist = self.device_whitelist()
if whitelist:
ctxt['dpdk_enabled'] = config('enable-dpdk')
ctxt['device_whitelist'] = self.device_whitelist()
ctxt['socket_memory'] = self.socket_memory()
ctxt['cpu_mask'] = self.cpu_mask()
return ctxt
SHARED_SECRET = "/etc/neutron/secret.txt"
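For reference, a hypothetical context produced by OVSDPDKDeviceContext on a host with two NUMA nodes, default dpdk-socket-* settings and two resolved devices (all values assumed for illustration, consistent with the unit tests further down) would look like:
# Assumed example output; values mirror the unit test expectations rather
# than a real deployment.
ovs_dpdk_ctxt = {
    'dpdk_enabled': True,
    'cpu_mask': '0x11',                               # core 0 and core 4
    'socket_memory': '1024,1024',                     # 1024MB per NUMA node
    'device_whitelist': '-w 0000:00:1c.0 -w 0000:00:1d.0',
}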


@ -1,6 +1,7 @@
import os
import shutil
from itertools import chain
import subprocess
from charmhelpers.contrib.openstack.neutron import neutron_plugin_attribute
from copy import deepcopy
@ -23,7 +24,7 @@ from charmhelpers.contrib.openstack.utils import (
import neutron_ovs_context
from charmhelpers.contrib.network.ovs import (
add_bridge,
add_bridge_port,
# add_bridge_port,
full_restart,
)
from charmhelpers.core.hookenv import (
@ -106,6 +107,8 @@ DHCP_PACKAGES = ['neutron-dhcp-agent']
METADATA_PACKAGES = ['neutron-metadata-agent']
PHY_NIC_MTU_CONF = '/etc/init/os-charm-phy-nic-mtu.conf'
TEMPLATES = 'templates/'
OVS_DEFAULT = '/etc/default/openvswitch-switch'
DPDK_INTERFACES = '/etc/dpdk/interfaces'
BASE_RESOURCE_MAP = OrderedDict([
(NEUTRON_CONF, {
@ -123,6 +126,15 @@ BASE_RESOURCE_MAP = OrderedDict([
'services': ['neutron-openvswitch-agent'],
'contexts': [neutron_ovs_context.OVSPluginContext()],
}),
(OVS_DEFAULT, {
'services': ['openvswitch-switch'],
'contexts': [neutron_ovs_context.OVSDPDKDeviceContext()],
}),
(DPDK_INTERFACES, {
'services': ['dpdk'],
'contexts': [neutron_ovs_context.DPDKDeviceContext()],
}),
(PHY_NIC_MTU_CONF, {
'services': ['os-charm-phy-nic-mtu'],
'contexts': [context.PhyNICMTUContext()],
@ -155,6 +167,7 @@ DVR_RESOURCE_MAP = OrderedDict([
'contexts': [context.ExternalPortContext()],
}),
])
TEMPLATES = 'templates/'
INT_BRIDGE = "br-int"
EXT_BRIDGE = "br-ex"
@ -170,7 +183,10 @@ def install_packages():
dkms_packages = determine_dkms_package()
if dkms_packages:
apt_install([headers_package()] + dkms_packages, fatal=True)
apt_install(filter_installed_packages(determine_packages()))
apt_install(filter_installed_packages(determine_packages()),
fatal=True)
if use_dpdk():
enable_ovs_dpdk()
def purge_packages(pkg_list):
@ -207,6 +223,9 @@ def determine_packages():
pkgs.remove('neutron-plugin-openvswitch-agent')
pkgs.append('neutron-openvswitch-agent')
if use_dpdk():
pkgs.append('openvswitch-switch-dpdk')
return pkgs
@ -246,8 +265,16 @@ def resource_map():
resource_map[NEUTRON_CONF]['services'].append(
'neutron-openvswitch-agent'
)
if not use_dpdk():
# NOTE: /etc/default/openvswitch-switch is only used
# for DPDK configuration, so drop it if DPDK is not
# in use
del resource_map[OVS_DEFAULT]
del resource_map[DPDK_INTERFACES]
else:
del resource_map[OVS_CONF]
del resource_map[OVS_DEFAULT]
del resource_map[DPDK_INTERFACES]
return resource_map
@ -294,28 +321,52 @@ def determine_ports():
return ports
UPDATE_ALTERNATIVES = ['update-alternatives', '--set', 'ovs-vswitchd']
OVS_DPDK_BIN = '/usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk'
OVS_DEFAULT_BIN = '/usr/lib/openvswitch-switch/ovs-vswitchd'
def enable_ovs_dpdk():
'''Enables the DPDK variant of ovs-vswitchd and restarts it'''
subprocess.check_call(UPDATE_ALTERNATIVES + [OVS_DPDK_BIN])
if not is_unit_paused_set():
service_restart('openvswitch-switch')
def configure_ovs():
status_set('maintenance', 'Configuring ovs')
if not service_running('openvswitch-switch'):
full_restart()
add_bridge(INT_BRIDGE)
add_bridge(EXT_BRIDGE)
datapath_type = determine_datapath_type()
add_bridge(INT_BRIDGE, datapath_type)
add_bridge(EXT_BRIDGE, datapath_type)
ext_port_ctx = None
if use_dvr():
ext_port_ctx = ExternalPortContext()()
if ext_port_ctx and ext_port_ctx['ext_port']:
add_bridge_port(EXT_BRIDGE, ext_port_ctx['ext_port'])
portmaps = DataPortContext()()
bridgemaps = parse_bridge_mappings(config('bridge-mappings'))
for provider, br in bridgemaps.iteritems():
add_bridge(br)
if not portmaps:
continue
if not use_dpdk():
portmaps = DataPortContext()()
bridgemaps = parse_bridge_mappings(config('bridge-mappings'))
for br in bridgemaps.itervalues():
add_bridge(br, datapath_type)
if not portmaps:
continue
for port, _br in portmaps.iteritems():
if _br == br:
add_bridge_port(br, port, promisc=True)
for port, _br in portmaps.iteritems():
if _br == br:
add_bridge_port(br, port, promisc=True)
else:
# NOTE: when in dpdk mode, add based on pci bus order
# with type 'dpdk'
dpdk_bridgemaps = neutron_ovs_context.resolve_dpdk_ports()
device_index = 0
for br in dpdk_bridgemaps.itervalues():
add_bridge(br, datapath_type)
add_bridge_port(br, 'dpdk{}'.format(device_index),
port_type='dpdk')
device_index += 1
# Ensure this runs so that mtu is applied to data-port interfaces if
# provided.
@ -334,6 +385,36 @@ def use_dvr():
return context.NeutronAPIContext()()['enable_dvr']
def determine_datapath_type():
'''
Determine the ovs datapath type to use
@returns string containing the datapath type
'''
if use_dpdk():
return 'netdev'
return 'system'
def use_dpdk():
'''Determine whether DPDK should be used'''
release = os_release('neutron-common', base='icehouse')
if (release >= 'mitaka' and config('enable-dpdk')):
return True
return False
# TODO: update into charm-helpers to add port_type parameter
def add_bridge_port(name, port, promisc=False, port_type=None):
''' Add a port to the named openvswitch bridge '''
# log('Adding port {} to bridge {}'.format(port, name))
cmd = ["ovs-vsctl", "--", "--may-exist", "add-port", name, port]
if port_type:
cmd += ['--', 'set', 'Interface', port,
'type={}'.format(port_type)]
subprocess.check_call(cmd)
def enable_nova_metadata():
return use_dvr() or enable_local_dhcp()
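To illustrate the DPDK branch of configure_ovs, the sketch below (with an assumed two-entry result from resolve_dpdk_ports) prints the bridge and port helper calls that would be made; ports are named dpdk0, dpdk1, ... as the map is iterated:
# Sketch with assumed values; prints the helper calls instead of touching OVS.
dpdk_bridgemaps = {                     # assumed resolve_dpdk_ports() result
    '0000:00:1c.0': 'br-phynet1',
    '0000:00:1d.0': 'br-phynet2',
}
for index, bridge in enumerate(dpdk_bridgemaps.values()):
    print("add_bridge({!r}, datapath_type='netdev')".format(bridge))
    print("add_bridge_port({!r}, 'dpdk{}', port_type='dpdk')".format(bridge,
                                                                     index))
# add_bridge('br-phynet1', datapath_type='netdev')
# add_bridge_port('br-phynet1', 'dpdk0', port_type='dpdk')
# ...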

hooks/pci.py (new file, 118 lines)

@ -0,0 +1,118 @@
#!/usr/bin/python
import os
import glob
import subprocess
import shlex
from charmhelpers.core.hookenv import(
log,
)
def format_pci_addr(pci_addr):
domain, bus, slot_func = pci_addr.split(':')
slot, func = slot_func.split('.')
return '{}:{}:{}.{}'.format(domain.zfill(4), bus.zfill(2), slot.zfill(2),
func)
class PCINetDevice(object):
def __init__(self, pci_address):
self.pci_address = pci_address
self.interface_name = None
self.mac_address = None
self.state = None
self.update_attributes()
def update_attributes(self):
self.update_interface_info()
def update_interface_info(self):
self.update_interface_info_eth()
def update_interface_info_eth(self):
net_devices = self.get_sysnet_interfaces_and_macs()
for interface in net_devices:
if self.pci_address == interface['pci_address']:
self.interface_name = interface['interface']
self.mac_address = interface['mac_address']
self.state = interface['state']
def get_sysnet_interfaces_and_macs(self):
net_devs = []
for sdir in glob.glob('/sys/class/net/*'):
sym_link = sdir + "/device"
if os.path.islink(sym_link):
fq_path = os.path.realpath(sym_link)
path = fq_path.split('/')
if 'virtio' in path[-1]:
pci_address = path[-2]
else:
pci_address = path[-1]
net_devs.append({
'interface': self.get_sysnet_interface(sdir),
'mac_address': self.get_sysnet_mac(sdir),
'pci_address': pci_address,
'state': self.get_sysnet_device_state(sdir),
})
return net_devs
def get_sysnet_mac(self, sysdir):
mac_addr_file = sysdir + '/address'
with open(mac_addr_file, 'r') as f:
read_data = f.read()
mac = read_data.strip()
log('mac from {} is {}'.format(mac_addr_file, mac))
return mac
def get_sysnet_device_state(self, sysdir):
state_file = sysdir + '/operstate'
with open(state_file, 'r') as f:
read_data = f.read()
state = read_data.strip()
log('state from {} is {}'.format(state_file, state))
return state
def get_sysnet_interface(self, sysdir):
return sysdir.split('/')[-1]
class PCINetDevices(object):
def __init__(self):
pci_addresses = self.get_pci_ethernet_addresses()
self.pci_devices = [PCINetDevice(dev) for dev in pci_addresses]
def get_pci_ethernet_addresses(self):
cmd = ['lspci', '-m', '-D']
lspci_output = subprocess.check_output(cmd)
pci_addresses = []
for line in lspci_output.split('\n'):
columns = shlex.split(line)
if len(columns) > 1 and columns[1] == 'Ethernet controller':
pci_address = columns[0]
pci_addresses.append(format_pci_addr(pci_address))
return pci_addresses
def update_devices(self):
for pcidev in self.pci_devices:
pcidev.update_attributes()
def get_macs(self):
macs = []
for pcidev in self.pci_devices:
if pcidev.mac_address:
macs.append(pcidev.mac_address)
return macs
def get_device_from_mac(self, mac):
for pcidev in self.pci_devices:
if pcidev.mac_address == mac:
return pcidev
return None
def get_device_from_pci_address(self, pci_addr):
for pcidev in self.pci_devices:
if pcidev.pci_address == pci_addr:
return pcidev
return None


@ -0,0 +1,21 @@
###############################################################################
# [ WARNING ]
# Configuration file maintained by Juju. Local changes may be overwritten.
# Configuration managed by neutron-openvswitch charm
###############################################################################
#
# <bus> Currently only "pci" is supported
# <id> Device ID on the specified bus
# <driver> Driver to bind against (vfio-pci or uio_pci_generic)
#
# Note that, depending on your network card and what you want to set up,
# the ixgbe or virtio-pci drivers might also apply, but these are default
# kernel drivers and therefore do not need to be rebound as DPDK interfaces.
#
# Be aware that these two drivers are part of the linux-image-extra-<VERSION>
# package, in case you run into missing module issues.
#
# <bus> <id> <driver>
{% for device in devices -%}
pci {{ device }} {{ driver }}
{% endfor -%}
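As a worked example (the device addresses and driver are assumed, and the jinja2 package stands in for the charm's templating machinery), rendering the template above would produce:
# Minimal sketch; the device list and driver are assumed values.
from jinja2 import Template

interfaces_template = Template(
    "{% for device in devices -%}\n"
    "pci {{ device }} {{ driver }}\n"
    "{% endfor -%}")
print(interfaces_template.render(devices=['0000:00:1c.0', '0000:00:1d.0'],
                                 driver='uio_pci_generic'))
# pci 0000:00:1c.0 uio_pci_generic
# pci 0000:00:1d.0 uio_pci_generic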


@ -8,6 +8,9 @@
enable_tunneling = True
local_ip = {{ local_ip }}
bridge_mappings = {{ bridge_mappings }}
{% if enable_dpdk -%}
datapath_type = netdev
{% endif -%}
[agent]
tunnel_types = {{ overlay_network_type }}
@ -19,7 +22,7 @@ veth_mtu = {{ veth_mtu }}
{% endif -%}
[securitygroup]
{% if neutron_security_groups -%}
{% if neutron_security_groups and not enable_dpdk -%}
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
{% else -%}


@ -0,0 +1,9 @@
# This is a POSIX shell fragment -*- sh -*-
###############################################################################
# [ WARNING ]
# Configuration file maintained by Juju. Local changes may be overwritten.
# Configuration managed by neutron-openvswitch charm
###############################################################################
{% if dpdk_enabled -%}
DPDK_OPTS='--dpdk -c {{ cpu_mask }} -n 4 --socket-mem {{ socket_memory }} {{ device_whitelist }}'
{% endif -%}
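Using the example context values from earlier (assumed, not taken from a real host), the rendered DPDK_OPTS line would read:
# Assumed example values for a two NUMA node host.
cpu_mask = '0x11'
socket_memory = '1024,1024'
device_whitelist = '-w 0000:00:1c.0 -w 0000:00:1d.0'
print("DPDK_OPTS='--dpdk -c {} -n 4 --socket-mem {} {}'".format(
    cpu_mask, socket_memory, device_whitelist))
# DPDK_OPTS='--dpdk -c 0x11 -n 4 --socket-mem 1024,1024 -w 0000:00:1c.0 -w 0000:00:1d.0'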


@ -0,0 +1,68 @@
# flake8: noqa
LSPCI = """
0000:00:00.0 "Host bridge" "Intel Corporation" "Haswell-E DMI2" -r02 "Intel Corporation" "Device 0000"
0000:00:03.0 "PCI bridge" "Intel Corporation" "Haswell-E PCI Express Root Port 3" -r02 "" ""
0000:00:03.2 "PCI bridge" "Intel Corporation" "Haswell-E PCI Express Root Port 3" -r02 "" ""
0000:00:05.0 "System peripheral" "Intel Corporation" "Haswell-E Address Map, VTd_Misc, System Management" -r02 "" ""
0000:00:05.1 "System peripheral" "Intel Corporation" "Haswell-E Hot Plug" -r02 "" ""
0000:00:05.2 "System peripheral" "Intel Corporation" "Haswell-E RAS, Control Status and Global Errors" -r02 "" ""
0000:00:05.4 "PIC" "Intel Corporation" "Haswell-E I/O Apic" -r02 -p20 "Intel Corporation" "Device 0000"
0000:00:11.0 "Unassigned class [ff00]" "Intel Corporation" "Wellsburg SPSR" -r05 "Intel Corporation" "Device 7270"
0000:00:11.4 "SATA controller" "Intel Corporation" "Wellsburg sSATA Controller [AHCI mode]" -r05 -p01 "Cisco Systems Inc" "Device 0067"
0000:00:16.0 "Communication controller" "Intel Corporation" "Wellsburg MEI Controller #1" -r05 "Intel Corporation" "Device 7270"
0000:00:16.1 "Communication controller" "Intel Corporation" "Wellsburg MEI Controller #2" -r05 "Intel Corporation" "Device 7270"
0000:00:1a.0 "USB controller" "Intel Corporation" "Wellsburg USB Enhanced Host Controller #2" -r05 -p20 "Intel Corporation" "Device 7270"
0000:00:1c.0 "PCI bridge" "Intel Corporation" "Wellsburg PCI Express Root Port #1" -rd5 "" ""
0000:00:1c.3 "PCI bridge" "Intel Corporation" "Wellsburg PCI Express Root Port #4" -rd5 "" ""
0000:00:1c.4 "PCI bridge" "Intel Corporation" "Wellsburg PCI Express Root Port #5" -rd5 "" ""
0000:00:1d.0 "USB controller" "Intel Corporation" "Wellsburg USB Enhanced Host Controller #1" -r05 -p20 "Intel Corporation" "Device 7270"
0000:00:1f.0 "ISA bridge" "Intel Corporation" "Wellsburg LPC Controller" -r05 "Intel Corporation" "Device 7270"
0000:00:1f.2 "SATA controller" "Intel Corporation" "Wellsburg 6-Port SATA Controller [AHCI mode]" -r05 -p01 "Cisco Systems Inc" "Device 0067"
0000:01:00.0 "PCI bridge" "Cisco Systems Inc" "VIC 82 PCIe Upstream Port" -r01 "" ""
0000:02:00.0 "PCI bridge" "Cisco Systems Inc" "VIC PCIe Downstream Port" -ra2 "" ""
0000:02:01.0 "PCI bridge" "Cisco Systems Inc" "VIC PCIe Downstream Port" -ra2 "" ""
0000:03:00.0 "Unclassified device [00ff]" "Cisco Systems Inc" "VIC Management Controller" -ra2 "Cisco Systems Inc" "Device 012e"
0000:04:00.0 "PCI bridge" "Cisco Systems Inc" "VIC PCIe Upstream Port" -ra2 "" ""
0000:05:00.0 "PCI bridge" "Cisco Systems Inc" "VIC PCIe Downstream Port" -ra2 "" ""
0000:05:01.0 "PCI bridge" "Cisco Systems Inc" "VIC PCIe Downstream Port" -ra2 "" ""
0000:05:02.0 "PCI bridge" "Cisco Systems Inc" "VIC PCIe Downstream Port" -ra2 "" ""
0000:05:03.0 "PCI bridge" "Cisco Systems Inc" "VIC PCIe Downstream Port" -ra2 "" ""
0000:08:00.0 "Fibre Channel" "Cisco Systems Inc" "VIC FCoE HBA" -ra2 "Cisco Systems Inc" "Device 012e"
0000:09:00.0 "Fibre Channel" "Cisco Systems Inc" "VIC FCoE HBA" -ra2 "Cisco Systems Inc" "Device 012e"
0000:0b:00.0 "RAID bus controller" "LSI Logic / Symbios Logic" "MegaRAID SAS-3 3108 [Invader]" -r02 "Cisco Systems Inc" "Device 00db"
0000:0f:00.0 "VGA compatible controller" "Matrox Electronics Systems Ltd." "MGA G200e [Pilot] ServerEngines (SEP1)" -r02 "Cisco Systems Inc" "Device 0101"
0000:10:00.0 "Ethernet controller" "Intel Corporation" "I350 Gigabit Network Connection" -r01 "Cisco Systems Inc" "Device 00d6"
0000:10:00.1 "Ethernet controller" "Intel Corporation" "I350 Gigabit Network Connection" -r01 "Cisco Systems Inc" "Device 00d6"
0000:7f:08.0 "System peripheral" "Intel Corporation" "Haswell-E QPI Link 0" -r02 "Intel Corporation" "Haswell-E QPI Link 0"
"""
SYS_TREE = {
'/sys/class/net/eth2': '../../devices/pci0000:00/0000:00:1c.4/0000:10:00.0/net/eth2',
'/sys/class/net/eth3': '../../devices/pci0000:00/0000:00:1c.4/0000:10:00.1/net/eth3',
'/sys/class/net/juju-br0': '../../devices/virtual/net/juju-br0',
'/sys/class/net/lo': '../../devices/virtual/net/lo',
'/sys/class/net/lxcbr0': '../../devices/virtual/net/lxcbr0',
'/sys/class/net/veth1GVRCF': '../../devices/virtual/net/veth1GVRCF',
'/sys/class/net/veth7AXEUK': '../../devices/virtual/net/veth7AXEUK',
'/sys/class/net/vethACOIJJ': '../../devices/virtual/net/vethACOIJJ',
'/sys/class/net/vethMQ819H': '../../devices/virtual/net/vethMQ819H',
'/sys/class/net/virbr0': '../../devices/virtual/net/virbr0',
'/sys/class/net/virbr0-nic': '../../devices/virtual/net/virbr0-nic',
'/sys/devices/pci0000:00/0000:00:1c.4/0000:10:00.0/net/eth2/device': '../../../0000:10:00.0',
'/sys/devices/pci0000:00/0000:00:1c.4/0000:10:00.1/net/eth3/device': '../../../0000:10:00.1',
}
FILE_CONTENTS = {
'/sys/class/net/eth2/address': 'a8:9d:21:cf:93:fc',
'/sys/class/net/eth3/address': 'a8:9d:21:cf:93:fd',
'/sys/class/net/eth2/operstate': 'up',
'/sys/class/net/eth3/operstate': 'down',
}
COMMANDS = {
'LSPCI_MD': ['lspci', '-m', '-D'],
}
NET_SETUP = {
'LSPCI_MD': LSPCI,
}


@ -1,7 +1,7 @@
from test_utils import CharmTestCase
from test_utils import patch_open
from mock import patch
from mock import patch, Mock
import neutron_ovs_context as context
import charmhelpers
@ -11,6 +11,8 @@ TO_PATCH = [
'unit_get',
'get_host_ip',
'network_get_primary_address',
'glob',
'PCINetDevices',
]
@ -98,7 +100,8 @@ class OVSPluginContextTest(CharmTestCase):
'debug': True,
'bridge-mappings': "physnet1:br-data physnet2:br-data",
'flat-network-providers': 'physnet3 physnet4',
'prevent-arp-spoofing': False}
'prevent-arp-spoofing': False,
'enable-dpdk': False}
def mock_config(key=None):
if key:
@ -133,6 +136,7 @@ class OVSPluginContextTest(CharmTestCase):
'veth_mtu': 1500,
'config': 'neutron.randomconfig',
'use_syslog': True,
'enable_dpdk': False,
'network_manager': 'neutron',
'debug': True,
'core_plugin': 'neutron.randomdriver',
@ -199,6 +203,7 @@ class OVSPluginContextTest(CharmTestCase):
'network_device_mtu': 1500,
'config': 'neutron.randomconfig',
'use_syslog': True,
'enable_dpdk': False,
'network_manager': 'neutron',
'debug': True,
'core_plugin': 'neutron.randomdriver',
@ -210,6 +215,7 @@ class OVSPluginContextTest(CharmTestCase):
'vlan_ranges': 'physnet1:1000:2000',
'prevent_arp_spoofing': True,
}
self.maxDiff = None
self.assertEquals(expect, napi_ctxt())
@ -303,3 +309,158 @@ class SharedSecretContext(CharmTestCase):
_shared_secret.return_value = 'secret_thing'
self.resolve_address.return_value = '10.0.0.10'
self.assertEquals(context.SharedSecretContext()(), {})
class MockPCIDevice(object):
'''Simple wrapper to mock pci.PCINetDevice class'''
def __init__(self, address):
self.pci_address = address
TEST_CPULIST_1 = "0-3"
TEST_CPULIST_2 = "0-7,16-23"
DPDK_DATA_PORTS = (
"br-phynet3:fe:16:41:df:23:fe "
"br-phynet1:fe:16:41:df:23:fd "
"br-phynet2:fe:f2:d0:45:dc:66"
)
PCI_DEVICE_MAP = {
'fe:16:41:df:23:fd': MockPCIDevice('0000:00:1c.0'),
'fe:16:41:df:23:fe': MockPCIDevice('0000:00:1d.0'),
}
class TestDPDKUtils(CharmTestCase):
def setUp(self):
super(TestDPDKUtils, self).setUp(context, TO_PATCH)
self.config.side_effect = self.test_config.get
def test_parse_cpu_list(self):
self.assertEqual(context.parse_cpu_list(TEST_CPULIST_1),
[0, 1, 2, 3])
self.assertEqual(context.parse_cpu_list(TEST_CPULIST_2),
[0, 1, 2, 3, 4, 5, 6, 7,
16, 17, 18, 19, 20, 21, 22, 23])
@patch.object(context, 'parse_cpu_list', wraps=context.parse_cpu_list)
def test_numa_node_cores(self, _parse_cpu_list):
self.glob.glob.return_value = [
'/sys/devices/system/node/node0'
]
with patch_open() as (_, mock_file):
mock_file.read.return_value = TEST_CPULIST_1
self.assertEqual(context.numa_node_cores(),
{'0': [0, 1, 2, 3]})
self.glob.glob.assert_called_with('/sys/devices/system/node/node*')
_parse_cpu_list.assert_called_with(TEST_CPULIST_1)
def test_resolve_dpdk_ports(self):
self.test_config.set('data-port', DPDK_DATA_PORTS)
_pci_devices = Mock()
_pci_devices.get_device_from_mac.side_effect = PCI_DEVICE_MAP.get
self.PCINetDevices.return_value = _pci_devices
self.assertEqual(context.resolve_dpdk_ports(),
{'0000:00:1c.0': 'br-phynet1',
'0000:00:1d.0': 'br-phynet3'})
DPDK_PATCH = [
'parse_cpu_list',
'numa_node_cores',
'resolve_dpdk_ports',
'glob',
]
NUMA_CORES_SINGLE = {
'0': [0, 1, 2, 3]
}
NUMA_CORES_MULTI = {
'0': [0, 1, 2, 3],
'1': [4, 5, 6, 7]
}
class TestOVSDPDKDeviceContext(CharmTestCase):
def setUp(self):
super(TestOVSDPDKDeviceContext, self).setUp(context,
TO_PATCH + DPDK_PATCH)
self.config.side_effect = self.test_config.get
self.test_context = context.OVSDPDKDeviceContext()
self.test_config.set('enable-dpdk', True)
def test_device_whitelist(self):
'''Test device whitelist generation'''
self.resolve_dpdk_ports.return_value = [
'0000:00:1c.0',
'0000:00:1d.0'
]
self.assertEqual(self.test_context.device_whitelist(),
'-w 0000:00:1c.0 -w 0000:00:1d.0')
def test_socket_memory(self):
'''Test socket memory configuration'''
self.glob.glob.return_value = ['a']
self.assertEqual(self.test_context.socket_memory(),
'1024')
self.glob.glob.return_value = ['a', 'b']
self.assertEqual(self.test_context.socket_memory(),
'1024,1024')
self.test_config.set('dpdk-socket-memory', 2048)
self.assertEqual(self.test_context.socket_memory(),
'2048,2048')
def test_cpu_mask(self):
'''Test generation of hex CPU masks'''
self.numa_node_cores.return_value = NUMA_CORES_SINGLE
self.assertEqual(self.test_context.cpu_mask(), '0x01')
self.numa_node_cores.return_value = NUMA_CORES_MULTI
self.assertEqual(self.test_context.cpu_mask(), '0x11')
self.test_config.set('dpdk-socket-cores', 2)
self.assertEqual(self.test_context.cpu_mask(), '0x33')
def test_context_no_devices(self):
'''Ensure that DPDK is disabled when no devices are detected'''
self.resolve_dpdk_ports.return_value = []
self.assertEqual(self.test_context(), {})
def test_context_devices(self):
'''Ensure DPDK is enabled when devices are detected'''
self.resolve_dpdk_ports.return_value = [
'0000:00:1c.0',
'0000:00:1d.0'
]
self.numa_node_cores.return_value = NUMA_CORES_SINGLE
self.glob.glob.return_value = ['a']
self.assertEqual(self.test_context(), {
'cpu_mask': '0x01',
'device_whitelist': '-w 0000:00:1c.0 -w 0000:00:1d.0',
'dpdk_enabled': True,
'socket_memory': '1024'
})
class TestDPDKDeviceContext(CharmTestCase):
def setUp(self):
super(TestDPDKDeviceContext, self).setUp(context,
TO_PATCH + DPDK_PATCH)
self.config.side_effect = self.test_config.get
self.test_context = context.DPDKDeviceContext()
def test_context(self):
self.resolve_dpdk_ports.return_value = [
'0000:00:1c.0',
'0000:00:1d.0'
]
self.assertEqual(self.test_context(), {
'devices': ['0000:00:1c.0', '0000:00:1d.0'],
'driver': 'uio_pci_generic'
})
self.config.assert_called_with('dpdk-driver')


@ -30,6 +30,7 @@ TO_PATCH = [
'determine_dkms_package',
'headers_package',
'status_set',
'use_dpdk',
]
head_pkg = 'linux-headers-3.15.0-5-generic'
@ -75,6 +76,7 @@ class TestNeutronOVSUtils(CharmTestCase):
super(TestNeutronOVSUtils, self).setUp(nutils, TO_PATCH)
self.neutron_plugin_attribute.side_effect = _mock_npa
self.config.side_effect = self.test_config.get
self.use_dpdk.return_value = False
def tearDown(self):
# Reset cached cache
@ -85,7 +87,8 @@ class TestNeutronOVSUtils(CharmTestCase):
_determine_packages.return_value = 'randompkg'
nutils.install_packages()
self.apt_update.assert_called_with()
self.apt_install.assert_called_with(self.filter_installed_packages())
self.apt_install.assert_called_with(self.filter_installed_packages(),
fatal=True)
@patch.object(nutils, 'determine_packages')
def test_install_packages_dkms_needed(self, _determine_packages):
@ -98,7 +101,7 @@ class TestNeutronOVSUtils(CharmTestCase):
self.apt_install.assert_has_calls([
call(['linux-headers-foobar',
'openvswitch-datapath-dkms'], fatal=True),
call(self.filter_installed_packages()),
call(self.filter_installed_packages(), fatal=True),
])
@patch.object(nutils, 'use_dvr')
@ -277,9 +280,9 @@ class TestNeutronOVSUtils(CharmTestCase):
self.test_config.set('data-port', 'eth0')
nutils.configure_ovs()
self.add_bridge.assert_has_calls([
call('br-int'),
call('br-ex'),
call('br-data')
call('br-int', 'system'),
call('br-ex', 'system'),
call('br-data', 'system')
])
self.assertTrue(self.add_bridge_port.called)
@ -289,9 +292,9 @@ class TestNeutronOVSUtils(CharmTestCase):
self.add_bridge_port.reset_mock()
nutils.configure_ovs()
self.add_bridge.assert_has_calls([
call('br-int'),
call('br-ex'),
call('br-data')
call('br-int', 'system'),
call('br-ex', 'system'),
call('br-data', 'system')
])
# Not called since we have a bogus bridge in data-ports
self.assertFalse(self.add_bridge_port.called)
@ -328,12 +331,41 @@ class TestNeutronOVSUtils(CharmTestCase):
DummyContext(return_value={'ext_port': 'eth0'})
nutils.configure_ovs()
self.add_bridge.assert_has_calls([
call('br-int'),
call('br-ex'),
call('br-data')
call('br-int', 'system'),
call('br-ex', 'system'),
call('br-data', 'system')
])
self.add_bridge_port.assert_called_with('br-ex', 'eth0')
@patch.object(neutron_ovs_context, 'resolve_dpdk_ports')
@patch.object(nutils, 'use_dvr')
@patch('charmhelpers.contrib.openstack.context.config')
def test_configure_ovs_dpdk(self, mock_config, _use_dvr,
_resolve_dpdk_ports):
_resolve_dpdk_ports.return_value = {
'0000:001c.01': 'br-phynet1',
'0000:001c.02': 'br-phynet2',
'0000:001c.03': 'br-phynet3',
}
_use_dvr.return_value = True
self.use_dpdk.return_value = True
mock_config.side_effect = self.test_config.get
self.config.side_effect = self.test_config.get
self.test_config.set('enable-dpdk', True)
nutils.configure_ovs()
self.add_bridge.assert_has_calls([
call('br-int', 'netdev'),
call('br-ex', 'netdev'),
call('br-phynet1', 'netdev'),
call('br-phynet2', 'netdev'),
call('br-phynet3', 'netdev'),
])
self.add_bridge_port.assert_has_calls([
call('br-phynet1', 'dpdk0', port_type='dpdk'),
call('br-phynet2', 'dpdk1', port_type='dpdk'),
call('br-phynet3', 'dpdk2', port_type='dpdk'),
])
@patch.object(neutron_ovs_context, 'SharedSecretContext')
def test_get_shared_secret(self, _dvr_secret_ctxt):
_dvr_secret_ctxt.return_value = \

unit_tests/test_pci.py (new file, 251 lines)

@ -0,0 +1,251 @@
from test_utils import CharmTestCase, patch_open
from test_pci_helper import (
check_device,
mocked_subprocess,
mocked_filehandle,
mocked_globs,
mocked_islink,
mocked_realpath,
)
from mock import patch, MagicMock
import pci
TO_PATCH = [
'glob',
'log',
'subprocess',
]
NOT_JSON = "Im not json"
class PCITest(CharmTestCase):
def setUp(self):
super(PCITest, self).setUp(pci, TO_PATCH)
def test_format_pci_addr(self):
self.assertEqual(pci.format_pci_addr('0:0:1.1'), '0000:00:01.1')
self.assertEqual(pci.format_pci_addr(
'0000:00:02.1'), '0000:00:02.1')
class PCINetDeviceTest(CharmTestCase):
def setUp(self):
super(PCINetDeviceTest, self).setUp(pci, TO_PATCH)
@patch('os.path.islink')
@patch('os.path.realpath')
def eth_int(self, pci_address, _osrealpath, _osislink, subproc_map=None):
self.glob.glob.side_effect = mocked_globs
_osislink.side_effect = mocked_islink
_osrealpath.side_effect = mocked_realpath
self.subprocess.check_output.side_effect = mocked_subprocess(
subproc_map=subproc_map)
with patch_open() as (_open, _file):
super_fh = mocked_filehandle()
_file.readlines = MagicMock()
_open.side_effect = super_fh._setfilename
_file.read.side_effect = super_fh._getfilecontents_read
_file.readlines.side_effect = super_fh._getfilecontents_readlines
netint = pci.PCINetDevice(pci_address)
return netint
def test_base_eth_device(self):
net = self.eth_int('0000:10:00.0')
expect = {
'interface_name': 'eth2',
'mac_address': 'a8:9d:21:cf:93:fc',
'pci_address': '0000:10:00.0',
'state': 'up',
}
self.assertTrue(check_device(net, expect))
@patch('pci.PCINetDevice.get_sysnet_interfaces_and_macs')
@patch('pci.PCINetDevice.update_attributes')
def test_update_interface_info_eth(self, _update, _sysnet_ints):
dev = pci.PCINetDevice('0000:10:00.0')
_sysnet_ints.return_value = [
{
'interface': 'eth2',
'mac_address': 'a8:9d:21:cf:93:fc',
'pci_address': '0000:10:00.0',
'state': 'up'
},
{
'interface': 'eth3',
'mac_address': 'a8:9d:21:cf:93:fd',
'pci_address': '0000:10:00.1',
'state': 'down'
}
]
dev.update_interface_info_eth()
self.assertEqual(dev.interface_name, 'eth2')
@patch('os.path.islink')
@patch('os.path.realpath')
@patch('pci.PCINetDevice.get_sysnet_device_state')
@patch('pci.PCINetDevice.get_sysnet_mac')
@patch('pci.PCINetDevice.get_sysnet_interface')
@patch('pci.PCINetDevice.update_attributes')
def test_get_sysnet_interfaces_and_macs(self, _update, _interface, _mac,
_state, _osrealpath, _osislink):
dev = pci.PCINetDevice('0000:06:00.0')
self.glob.glob.return_value = ['/sys/class/net/eth2']
_interface.return_value = 'eth2'
_mac.return_value = 'a8:9d:21:cf:93:fc'
_state.return_value = 'up'
_osrealpath.return_value = ('/sys/devices/pci0000:00/0000:00:02.0/'
'0000:02:00.0/0000:03:00.0/0000:04:00.0/'
'0000:05:01.0/0000:07:00.0')
expect = {
'interface': 'eth2',
'mac_address': 'a8:9d:21:cf:93:fc',
'pci_address': '0000:07:00.0',
'state': 'up',
}
self.assertEqual(dev.get_sysnet_interfaces_and_macs(), [expect])
@patch('os.path.islink')
@patch('os.path.realpath')
@patch('pci.PCINetDevice.get_sysnet_device_state')
@patch('pci.PCINetDevice.get_sysnet_mac')
@patch('pci.PCINetDevice.get_sysnet_interface')
@patch('pci.PCINetDevice.update_attributes')
def test_get_sysnet_interfaces_and_macs_virtio(self, _update, _interface,
_mac, _state, _osrealpath,
_osislink):
dev = pci.PCINetDevice('0000:06:00.0')
self.glob.glob.return_value = ['/sys/class/net/eth2']
_interface.return_value = 'eth2'
_mac.return_value = 'a8:9d:21:cf:93:fc'
_state.return_value = 'up'
_osrealpath.return_value = ('/sys/devices/pci0000:00/0000:00:07.0/'
'virtio5')
expect = {
'interface': 'eth2',
'mac_address': 'a8:9d:21:cf:93:fc',
'pci_address': '0000:00:07.0',
'state': 'up',
}
self.assertEqual(dev.get_sysnet_interfaces_and_macs(), [expect])
@patch('pci.PCINetDevice.update_attributes')
def test_get_sysnet_mac(self, _update):
device = pci.PCINetDevice('0000:10:00.1')
with patch_open() as (_open, _file):
super_fh = mocked_filehandle()
_file.readlines = MagicMock()
_open.side_effect = super_fh._setfilename
_file.read.side_effect = super_fh._getfilecontents_read
macaddr = device.get_sysnet_mac('/sys/class/net/eth3')
self.assertEqual(macaddr, 'a8:9d:21:cf:93:fd')
@patch('pci.PCINetDevice.update_attributes')
def test_get_sysnet_device_state(self, _update):
device = pci.PCINetDevice('0000:10:00.1')
with patch_open() as (_open, _file):
super_fh = mocked_filehandle()
_file.readlines = MagicMock()
_open.side_effect = super_fh._setfilename
_file.read.side_effect = super_fh._getfilecontents_read
state = device.get_sysnet_device_state('/sys/class/net/eth3')
self.assertEqual(state, 'down')
@patch('pci.PCINetDevice.update_attributes')
def test_get_sysnet_interface(self, _update):
device = pci.PCINetDevice('0000:10:00.1')
self.assertEqual(
device.get_sysnet_interface('/sys/class/net/eth3'), 'eth3')
class PCINetDevicesTest(CharmTestCase):
def setUp(self):
super(PCINetDevicesTest, self).setUp(pci, TO_PATCH)
@patch('os.path.islink')
def pci_devs(self, _osislink, subproc_map=None):
self.glob.glob.side_effect = mocked_globs
rp_patcher = patch('os.path.realpath')
rp_mock = rp_patcher.start()
rp_mock.side_effect = mocked_realpath
_osislink.side_effect = mocked_islink
self.subprocess.check_output.side_effect = mocked_subprocess(
subproc_map=subproc_map)
with patch_open() as (_open, _file):
super_fh = mocked_filehandle()
_file.readlines = MagicMock()
_open.side_effect = super_fh._setfilename
_file.read.side_effect = super_fh._getfilecontents_read
_file.readlines.side_effect = super_fh._getfilecontents_readlines
devices = pci.PCINetDevices()
rp_patcher.stop()
return devices
def test_base(self):
devices = self.pci_devs()
self.assertEqual(len(devices.pci_devices), 2)
expect = {
'0000:10:00.0': {
'interface_name': 'eth2',
'mac_address': 'a8:9d:21:cf:93:fc',
'pci_address': '0000:10:00.0',
'state': 'up',
},
'0000:10:00.1': {
'interface_name': 'eth3',
'mac_address': 'a8:9d:21:cf:93:fd',
'pci_address': '0000:10:00.1',
'state': 'down',
},
}
for device in devices.pci_devices:
self.assertTrue(check_device(device, expect[device.pci_address]))
def test_get_pci_ethernet_addresses(self):
devices = self.pci_devs()
expect = ['0000:10:00.0', '0000:10:00.1']
self.assertEqual(devices.get_pci_ethernet_addresses(), expect)
@patch('pci.PCINetDevice.update_attributes')
def test_update_devices(self, _update):
devices = self.pci_devs()
call_count = _update.call_count
devices.update_devices()
self.assertEqual(_update.call_count, call_count + 2)
def test_get_macs(self):
devices = self.pci_devs()
expect = ['a8:9d:21:cf:93:fc', 'a8:9d:21:cf:93:fd']
self.assertEqual(devices.get_macs(), expect)
def test_get_device_from_mac(self):
devices = self.pci_devs()
expect = {
'0000:10:00.1': {
'interface_name': 'eth3',
'mac_address': 'a8:9d:21:cf:93:fd',
'pci_address': '0000:10:00.1',
'state': 'down',
},
}
self.assertTrue(check_device(
devices.get_device_from_mac('a8:9d:21:cf:93:fd'),
expect['0000:10:00.1']))
def test_get_device_from_pci_address(self):
devices = self.pci_devs()
expect = {
'0000:10:00.1': {
'interface_name': 'eth3',
'mac_address': 'a8:9d:21:cf:93:fd',
'pci_address': '0000:10:00.1',
'state': 'down',
},
}
self.assertTrue(check_device(
devices.get_device_from_pci_address('0000:10:00.1'),
expect['0000:10:00.1']))


@ -0,0 +1,94 @@
#!/usr/bin/python
import pci
from test_utils import patch_open
from mock import patch, MagicMock
import pci_responses
import os
def check_device(device, attr_dict):
equal = device.interface_name == attr_dict['interface_name'] and \
device.mac_address == attr_dict['mac_address'] and \
device.pci_address == attr_dict['pci_address'] and \
device.state == attr_dict['state']
return equal
def mocked_subprocess(subproc_map=None):
def _subproc(cmd, stdin=None):
for key in pci_responses.COMMANDS.keys():
if pci_responses.COMMANDS[key] == cmd:
return subproc_map[key]
elif pci_responses.COMMANDS[key] == cmd[:-1]:
return subproc_map[cmd[-1]][key]
if not subproc_map:
subproc_map = pci_responses.NET_SETUP
return _subproc
class mocked_filehandle(object):
def _setfilename(self, fname, omode):
self.FILENAME = fname
def _getfilecontents_read(self):
return pci_responses.FILE_CONTENTS[self.FILENAME]
def _getfilecontents_readlines(self):
return pci_responses.FILE_CONTENTS[self.FILENAME].split('\n')
def mocked_globs(path):
check_path = path.rstrip('*').rstrip('/')
dirs = []
for sdir in pci_responses.SYS_TREE:
if check_path in sdir:
dirs.append(sdir)
return dirs
def mocked_islink(link):
resolved_relpath = mocked_resolve_link(link)
if pci_responses.SYS_TREE.get(resolved_relpath):
return True
else:
return False
def mocked_resolve_link(link):
resolved_relpath = None
for sdir in pci_responses.SYS_TREE:
if sdir in link:
rep_dir = "{}/{}".format(os.path.dirname(sdir),
pci_responses.SYS_TREE[sdir])
resolved_symlink = link.replace(sdir, rep_dir)
resolved_relpath = os.path.abspath(resolved_symlink)
return resolved_relpath
def mocked_realpath(link):
resolved_link = mocked_resolve_link(link)
return pci_responses.SYS_TREE[resolved_link]
@patch('pci.log')
@patch('pci.subprocess.Popen')
@patch('pci.subprocess.check_output')
@patch('pci.glob.glob')
@patch('pci.os.path.islink')
def pci_devs(_osislink, _glob, _check_output, _Popen, _log, subproc_map=None):
_glob.side_effect = mocked_globs
_osislink.side_effect = mocked_islink
_check_output.side_effect = mocked_subprocess(
subproc_map=subproc_map)
with patch_open() as (_open, _file), \
patch('pci.os.path.realpath') as _realpath:
super_fh = mocked_filehandle()
_file.readlines = MagicMock()
_open.side_effect = super_fh._setfilename
_file.read.side_effect = super_fh._getfilecontents_read
_file.readlines.side_effect = super_fh._getfilecontents_readlines
_realpath.side_effect = mocked_realpath
devices = pci.PCINetDevices()
return devices