A tool to load OpenStack clouds end to end in both control plane and data plane.
VMTP

What is VMTP

VMTP is a data path performance tool for OpenStack clouds.

Features

VMTP is a Python application that will automatically perform ping connectivity checks, ping round trip time measurement (latency), and TCP/UDP throughput measurement for the following flows on any OpenStack deployment:

  • VM to VM same network (private fixed IP)
  • VM to VM different network same tenant (intra-tenant L3 fixed IP)
  • VM to VM different network and tenant (floating IP inter-tenant L3)

Optionally, when an external Linux host is available:

  • External host/VM download and upload throughput/latency (L3/floating IP)

Optionally, when SSH login to any Linux host (native or virtual) is available:

  • Host to host throughput (intra-node and inter-node)

Optionally, VMTP can automatically extract CPU usage from all native hosts in the cloud during the throughput tests, provided the Ganglia monitoring service (gmond) is installed and enabled on those hosts.

For VM-related flows, VMTP will automatically create the necessary OpenStack resources (router, networks, subnets, key pairs, security groups, test VMs), perform the throughput measurements, then clean up all related resources before exiting.

In cases involving pre-existing native or virtual hosts, VMTP will SSH to the targeted hosts to perform the measurements.

Pre-requisites to run VMTP

  • Access to the cloud Horizon Dashboard
  • 1 working external network pre-configured on the cloud (VMTP will pick the first one found)
  • At least 2 floating IPs if an external router is configured, or 3 floating IPs if no external router is configured
  • 1 Linux image available in OpenStack (any distribution)
  • A configuration file that is properly set for the cloud to test (see "Configuration File" section below)
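
The repository ships a cfg.default.yaml with all supported settings; a typical run overrides only a few of them. The fragment below is illustrative only — the key names shown are examples, and cfg.default.yaml is the authoritative reference for the actual settings and their defaults:

```yaml
# Illustrative fragment only -- see cfg.default.yaml in the repository for
# the authoritative list of settings and their defaults.
image_name: 'Ubuntu Server'   # hypothetical: Linux image to boot the test VMs from
flavor_type: 'm1.small'       # hypothetical: flavor used for the test VMs
```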

For native/external host throughputs

  • A public key must be installed on the target hosts (see SSH password-less access below)
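
Password-less SSH access can be set up with standard OpenSSH tooling. A minimal sketch — the key file name and the target user/host are placeholders, not values mandated by VMTP:

```shell
# Generate a dedicated key pair with no passphrase (file name is illustrative)
ssh-keygen -q -t rsa -N "" -f ./vmtp_rsa

# Then install the public key on each target host, for example:
#   ssh-copy-id -i ./vmtp_rsa.pub <user>@<target-host>
```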

For pre-existing native host throughputs

  • Firewalls must be configured to allow TCP/UDP ports 5001 and TCP port 5002

For running VMTP Docker Image

Docker must be installed; see the Docker documentation for installation instructions.
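
With Docker installed, a containerized run typically mounts the configuration file into the container and invokes vmtp.py inside it. This is a sketch only — the image name is a placeholder, and the exact mount points and entry point depend on how the VMTP image is published:

```
# Illustrative only: <vmtp-image> is a placeholder for the published image name.
docker run -it --rm \
    -v "$PWD/cfg.yaml:/vmtp/cfg.yaml" \
    <vmtp-image> python vmtp.py -c cfg.yaml
```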

Sample Results Output

VMTP will display the results to stdout with the following data:

- Session general information (date, auth_url, OpenStack encaps, VMTP version...)
- List of results per flow, for each flow:
|   flow name
|   to and from IP addresses
|   to and from availability zones (if VM)
|   - results:
|   |   - TCP
|   |   |  packet size
|   |   |  throughput value
|   |   |  number of retransmissions
|   |   |  round trip time in ms
|   |   |  - CPU usage (if enabled), for each host in the openstack cluster
|   |   |  | baseline (before test starts)
|   |   |  | 1 or more readings during test
|   |   - UDP
|   |   |  - for each packet size
|   |   |  | throughput value
|   |   |  | loss rate
|   |   |  | CPU usage (if enabled)
|   |   - ICMP
|   |   |  average, min, max and stddev round trip time in ms

Detailed results can also be stored in a file in JSON format using the --json command line argument.
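
A JSON results file lends itself to simple post-processing. The sketch below assumes a hypothetical schema (a top-level "flows" list whose entries carry "desc" and per-protocol "results"); inspect your own --json output for the actual field names before relying on any of them:

```python
import json

def summarize(results_json):
    """Return one summary line per result in a VMTP-style JSON document.

    The schema used here ("flows" -> "desc"/"results" -> "protocol"/
    "throughput_kbps") is an illustrative assumption, not the documented
    VMTP output format.
    """
    data = json.loads(results_json)
    lines = []
    for flow in data.get("flows", []):
        for res in flow.get("results", []):
            lines.append("%s: %s %.1f Mbps"
                         % (flow["desc"], res["protocol"],
                            res["throughput_kbps"] / 1000.0))
    return lines

# Hypothetical sample document mimicking the assumed schema
sample = json.dumps({
    "flows": [
        {"desc": "VM to VM same network",
         "results": [{"protocol": "TCP", "throughput_kbps": 941000}]}
    ]
})
print(summarize(sample))
```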