Initial release of VMTP to stackforge

Change-Id: I30eb092d9a70dc6b3642a84887bb4604b1a3ea54
Yichen Wang 2015-02-04 11:58:00 -08:00
parent 32c7c9ba94
commit 09baadba2c
53 changed files with 5128 additions and 0 deletions

7
.coveragerc Normal file

@@ -0,0 +1,7 @@
[run]
branch = True
source = vmtp
omit = vmtp/tests/*,vmtp/openstack/*
[report]
ignore_errors = True

8
.dockerignore Normal file

@@ -0,0 +1,8 @@
ansible
installer
requirements-dev.txt
cloud_init*
.git
.gitignore
.gitreview
.pylintrc

59
.gitignore vendored Normal file

@@ -0,0 +1,59 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.testrepository
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.*sw?
*cscope*
.ropeproject/
# vmtp
*.local*
*.json

3
.mailmap Normal file

@@ -0,0 +1,3 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

7
.testr.conf Normal file

@@ -0,0 +1,7 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

16
CONTRIBUTING.rst Normal file

@@ -0,0 +1,16 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps on this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/vmtp

22
Dockerfile Normal file

@@ -0,0 +1,22 @@
# Dockerfile for creating a container that has VMTP installed and ready to use
FROM ubuntu:14.04
MAINTAINER openstack-systems-group <openstack-systems-group@cisco.com>
# Install VMTP script and dependencies
RUN apt-get update && apt-get install -y \
lib32z1-dev \
libffi-dev \
libssl-dev \
libxml2-dev \
libxslt1-dev \
libyaml-dev \
openssh-client \
python \
python-dev \
python-lxml \
python-pip
COPY . /vmtp/
RUN pip install -r /vmtp/requirements.txt
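# To build this image from the root of the VMTP repository
# (the tag name below is illustrative):
#   docker build -t <vmtp-docker-image-name> .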

4
HACKING.rst Normal file

@@ -0,0 +1,4 @@
vmtp Style Commandments
===============================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

176
LICENSE Normal file

@@ -0,0 +1,176 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

6
MANIFEST.in Normal file

@@ -0,0 +1,6 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

314
README.md Normal file

@@ -0,0 +1,314 @@
# VMTP: An OpenStack TCP/UDP throughput measurement tool
VMTP is a Python application that automatically performs ping connectivity checks, ping round-trip time (latency) measurement, and TCP/UDP throughput measurement for the following flows on any OpenStack deployment:
* VM to VM same network (private fixed IP)
* VM to VM different network same tenant (intra-tenant L3 fixed IP)
* VM to VM different network and tenant (floating IP inter-tenant L3)
Optionally, when an external Linux host is available:
* external host/VM download and upload throughput/latency (L3/floating IP)
Optionally, when ssh login to any Linux host (native or virtual) is available:
* host to host throughput (intra-node and inter-node)
For VM-related flows, VMTP will automatically create the necessary OpenStack resources (router, networks, subnets, key pairs, security groups, test VMs), perform the throughput measurements, then clean up all related resources before exiting.
When pre-existing native or virtual hosts are involved, VMTP will ssh to the targeted hosts to perform the measurements.
All TCP/UDP throughput measurements are done using the nuttcp tool by default.
The iperf tool can be used alternatively (--tp-tool iperf).
Optionally, VMTP can automatically extract CPU usage from all native hosts in the cloud during the throughput tests, provided the Ganglia monitoring service (gmond) is installed and enabled on those hosts.
Pre-requisite to run VMTP successfully:
* For VM related performance measurements:
* Access to the cloud Horizon Dashboard
* 1 working external network pre-configured on the cloud (VMTP will pick the first one found)
* at least 2 floating IPs if an external router is configured, or 3 floating IPs if no external router is configured
* 1 Linux image available in OpenStack (any distribution)
* a configuration file that is properly set for the cloud to test (see "Configuration File" section below)
* for native/external host throughput, a public key must be installed on the target hosts (see ssh password-less access below)
* for pre-existing native host throughputs, firewalls must be configured to allow TCP/UDP ports 5001 and TCP port 5002
* Docker if using the VMTP Docker image
## VMTP results output
VMTP will display the results to stdout with the following data:
* session general information (date, auth_url, OpenStack encaps, VMTP version...)
* list of results per flow, for each flow:
* flow name
* to and from IP addresses
* to and from availability zones (if VM)
* results:
* TCP
* throughput value
* number of retransmissions
* round trip time in ms
* CPU usage (if enabled), for each host in the openstack cluster:
* baseline (before test starts)
* 1 or more readings during test
* UDP
* for each packet size
* throughput value
* loss rate
* CPU usage (if enabled)
* ICMP
* average, min, max and stddev round trip time in ms
Detailed results can also be stored in a file in JSON format using the --json command line argument.
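Since the results are written as a JSON document, they are easy to post-process. A minimal Python sketch, assuming the file was produced with --json res.json (the key names in the actual VMTP output schema are not listed in this README and may differ):
```
import json

with open('res.json') as ff:
    results = json.load(ff)

# print every top-level entry (session information and flow results)
for key in results:
    print key, '=>', results[key]
```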
## How to run the VMTP tool
### VMTP Docker image
In its Docker image form, VMTP is located under the /vmtp directory in the container and can either take arguments from the host shell, or can be executed from inside the Docker image shell.
To run VMTP directly from the host shell (may require "sudo" up front if not root):
```
docker run -i -t <vmtp-docker-image-name> python /vmtp/vmtp.py <args>
```
To run VMTP from the Docker image shell:
```
docker run -i -t <vmtp-docker-image-name> /bin/bash
cd /vmtp
python vmtp.py <args>
```
(then type exit to terminate the container instance)
All the examples below assume running from inside the Docker image shell.
### Print VMTP usage
```
usage: vmtp.py [-h] [-c <config_file>] [-r <openrc_file>]
[-m <gmond_ip>[:<port>]] [-p <password>] [-t <time>]
[--host <user>@<host_ssh_ip>[:<server-listen-if-name>]]
[--external-host <user>@<ext_host_ssh_ip>]
[--access_info {host:<hostip>, user:<user>, password:<pass>}]
[--mongod_server <server ip>] [--json <file>]
[--tp-tool nuttcp|iperf] [--hypervisor name]
[--inter-node-only] [--protocols T|U|I]
[--bandwidth <bandwidth>] [--tcpbuf <tcp_pkt_size1,...>]
[--udpbuf <udp_pkt_size1,...>] [--no-env] [-d] [-v]
[--stop-on-error]
OpenStack VM Throughput V2.0.0
optional arguments:
-h, --help show this help message and exit
-c <config_file>, --config <config_file>
override default values with a config file
-r <openrc_file>, --rc <openrc_file>
source OpenStack credentials from rc file
-m <gmond_ip>[:<port>], --monitor <gmond_ip>[:<port>]
Enable CPU monitoring (requires Ganglia)
-p <password>, --password <password>
OpenStack password
-t <time>, --time <time>
throughput test duration in seconds (default 10 sec)
--host <user>@<host_ssh_ip>[:<server-listen-if-name>]
native host throughput (targets requires ssh key)
--external-host <user>@<ext_host_ssh_ip>
external-VM throughput (target requires ssh key)
--access_info {host:<hostip>, user:<user>, password:<pass>}
access info for control host
--mongod_server <server ip>
provide mongoDB server IP to store results
--json <file> store results in json format file
--tp-tool nuttcp|iperf
transport perf tool to use (default=nuttcp)
--hypervisor name hypervisor to use in the avail zone (1 per arg, up to
2 args)
--inter-node-only only measure inter-node
--protocols T|U|I protocols T(TCP), U(UDP), I(ICMP) - default=TUI (all)
--bandwidth <bandwidth>
the bandwidth limit for TCP/UDP flows in K/M/Gbps,
e.g. 128K/32M/5G. (default=no limit)
--tcpbuf <tcp_pkt_size1,...>
list of buffer length when transmitting over TCP in
Bytes, e.g. --tcpbuf 8192,65536. (default=65536)
--udpbuf <udp_pkt_size1,...>
list of buffer length when transmitting over UDP in
Bytes, e.g. --udpbuf 128,2048. (default=128,1024,8192)
--no-env do not read env variables
-d, --debug debug flag (very verbose)
-v, --version print version of this script and exit
--stop-on-error Stop and keep everything as-is on error (must cleanup
manually)
```
### OpenStack openrc file
VMTP requires downloading an "openrc" file from the OpenStack Dashboard (Project|Access & Security|API Access|Download OpenStack RC File).
This file should then be passed to VMTP using the -r option or should be sourced prior to invoking VMTP.
Note: the openrc file is not needed if VMTP only runs the native host throughput option (--host)
### Configuration file
VMTP configuration files follow the YAML syntax and contain variables used by VMTP to run and collect performance data.
The default configuration is stored in the cfg.default.yaml file.
Default values should be overwritten for any cloud under test by defining new variable values in a new configuration file that follows the same format.
Variables that are not defined in the new configuration file will retain their default values.
Parameters that you will most likely need to change are:
* the VM image name to use for running the performance tools; you can specify any standard Linux image (Ubuntu 12.04, 14.04, Fedora, RHEL7, CentOS...) - if needed, upload an image to OpenStack manually prior to running VMTP
* VM ssh user name to use (specific to the image)
* the flavor name to use (often specific to each cloud)
* name of the availability zone to use for running the performance test VMs (also specific to each cloud)
Check the content of cfg.default.yaml file as it contains the list of configuration variables and instructions on how to set them.
Create one configuration file for your specific cloud and use the -c option to pass that file name to VMTP.
Note: the configuration file is not needed if VMTP only runs the native host throughput option (--host)
### Typical run on an OpenStack cloud
If executing from a VMTP Docker image, "docker run" (or "sudo docker run") must be placed in front of these commands, unless you run them from a shell inside the container.
Run VMTP on an OpenStack cloud with the "cfg.nimbus.svl.yaml" configuration file, the "nimbus-openrc.sh" rc file and the "admin" password:
```
python vmtp.py -r nimbus-openrc.sh -c cfg.nimbus.svl.yaml -p admin
```
Only collect ICMP and TCP measurements:
```
python vmtp.py -r nimbus-openrc.sh -c cfg.nimbus.svl.yaml -p admin --protocols IT
```
### Collecting native host performance data
Run VMTP to get native host throughput within 172.29.87.29 and between 172.29.87.29 and 172.29.87.30, using the localadmin ssh username, and run each TCP/UDP test session for 120 seconds (instead of the default 10 seconds):
```
python vmtp.py --host localadmin@172.29.87.29 --host localadmin@172.29.87.30 --time 120
```
Note that this command requires each host to have the VMTP public key (ssh/id_rsa.pub) inserted into the .ssh/authorized_keys file in the user's home directory.
### Bandwidth limit for TCP/UDP flow measurements
Specifying a value with --bandwidth limits the bandwidth used when performing the throughput tests.
The default for both TCP and UDP is unlimited. For TCP, VMTP relies on the protocol itself to reach the best achievable throughput, while for UDP it performs a binary search to find the optimal bandwidth.
This is useful when running VMTP on production clouds: without a limit, the test tool will use up bandwidth that may be needed by live VMs. Setting a bandwidth limit helps prevent the test from impacting those VMs.
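For example, to cap every TCP/UDP flow at 1 Gbps (an illustrative invocation reusing the rc and configuration files from the examples above):
```
python vmtp.py -r nimbus-openrc.sh -c cfg.nimbus.svl.yaml -p admin --bandwidth 1G
```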
### Cloud upload/download performance data
Run the tool to get external host to VM upload and download speed:
```
python vmtp.py --external-host 172.29.87.29 -r nimbus-openrc.sh -c cfg.nimbus.svl.yaml -p admin
```
This example requires the external host to have the VMTP public key (ssh/id_rsa.pub) inserted into the .ssh/authorized_keys file in the root home directory.
### Public cloud
Public clouds are special because they may not expose all OpenStack APIs and may not allow all types of operations.
Some public clouds have limitations in the way virtual networks can be used or require the use of a specific external router.
Running VMTP against a public cloud will require a specific configuration file that takes into account those specificities.
Refer to the provided public cloud sample configuration files for more information.
### ssh password-less access
For host throughput (--host), VMTP expects the target hosts to be pre-provisioned with a public key in order to allow password-less ssh.
Test VMs are created through OpenStack by VMTP with the appropriate public key to allow password-less ssh. By default, VMTP uses a default public key located in ssh/id_rsa.pub; simply append the content of that file to the .ssh/authorized_keys file under the host login home directory.
This default VMTP public key should only be used for transient test VMs and MUST NOT be used to provision native hosts since the corresponding private key is open to anybody!
To use alternate key pairs, the 'private_key_file' variable in the configuration file must be overridden to point to the file containing the private key to use to connect with SSH.
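For example, to install the default VMTP public key on a target host (a sketch; it assumes ssh/id_rsa.pub was first copied over to that host):
```
cat id_rsa.pub >> ~/.ssh/authorized_keys
```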
### TCP throughput measurement
The TCP throughput reported is measured using the default message size of the test tool (64KB with nuttcp). The TCP MSS (maximum segment size) used is the one suggested by the TCP-IP stack (which is dependent on the MTU).
VMTP currently does not support specifying multiple send sizes (similar to the UDP test) but could be relatively easily extended to do so.
### UDP Throughput measurement
UDP throughput is tricky because of limitations of the performance tools used, limitations of the Linux kernel, and the criteria for finding the throughput to report.
The default setting is to find the "optimal" throughput with packet loss rate within the 2%..5% range.
This is achieved by successive iterations at different throughput values.
In some cases, it is not possible to converge with a loss rate within that range and trying to do so may require too many iterations.
The algorithm used is empiric and tries to achieve a result within a reasonable and bounded number of iterations. In most cases the optimal throughput is found in less than 30 seconds for any given flow.
UDP measurements are only available with nuttcp (not available with iperf).
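Below is a simplified sketch of that iteration logic (illustrative only, not VMTP's actual code; run_udp_test is a hypothetical helper returning the measured loss rate in percent for a given target rate in Kbps):
```
def find_optimal_rate(run_udp_test, rate_min=0, rate_max=10000000,
                      loss_min=2, loss_max=5, max_iterations=10):
    # start at the maximum allowed rate and narrow the search window
    rate = rate_max
    for _ in range(max_iterations):
        loss = run_udp_test(rate)
        if loss_min <= loss <= loss_max:
            break              # converged within the target loss range
        if loss > loss_max:
            rate_max = rate    # too much loss, search lower rates
        else:
            rate_min = rate    # loss too low, search higher rates
        rate = (rate_min + rate_max) / 2
    return rate                # bounded number of iterations in all cases
```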
### Host Selection in Availability Zone
The --hypervisor argument can be used to specify explicitly where to run the test VM in the configured availability zone.
This can be handy, for example, when exact VM placement can impact the data path performance (e.g. rack-based placement when the availability zone spans multiple racks).
The first --hypervisor argument specifies on which host to run the test server VM. The second --hypervisor argument (in the command line) specifies on which host to run the test client VMs.
The value of the argument must match the hypervisor host name as known by OpenStack (or as displayed using "nova hypervisor-list")
Example of usage:
```
vmtp.py --inter-node-only -r admin-openrc.sh -p lab -d --json vxlan-2.json --hypervisor tme212 --hypervisor tme211
```
### Measuring throughput on pre-existing VMs
It is possible to run VMTP between pre-existing VMs that are accessible through SSH (using floating IP).
Prior to running, the VMTP public key must be installed on each VM.
The first IP passed (--host) is always the one running the server side. Optionally a server side listening interface name can be passed if clients should connect using a particular server IP.
For example to measure throughput between 2 hosts using the network attached to the server interface "eth5":
```
python vmtp.py --host localadmin@172.29.87.29:eth5 --host localadmin@172.29.87.30
```
### Docker shared volume to share files with the container
VMTP can accept files as input (e.g. configuration and openrc file) and can generate json results into a file.
It is possible to use the VMTP Docker image with files persisted on the host by using Docker shared volumes.
For example, one can decide to mount the current host directory as /vmtp/shared in the container in read-write mode.
To get a copy of the VMTP default configuration file from the container:
```
docker run -v $PWD:/vmtp/shared:rw <docker-vmtp-image-name> cp /vmtp/cfg.default.yaml /vmtp/shared/mycfg.yaml
```
Assuming you have edited the configuration file "mycfg.yaml", retrieved an openrc file "admin-openrc.sh" from Horizon into the local directory, and would like to get results back in the "res.json" file: export the current directory ($PWD), map it to /vmtp/shared in the container in read/write mode, then run the script in the container using the files from the shared directory:
```
docker run -v $PWD:/vmtp/shared:rw -t <docker-vmtp-image-name> python /vmtp/vmtp.py -c shared/mycfg.yaml -r shared/admin-openrc.sh -p admin --json shared/res.json
cat res.json
```
## Caveats and Known Issues
* UDP throughput is not available if iperf is selected (the iperf UDP reported results are not reliable enough for iterating)
* If VMTP hangs for native host throughput, check firewall rules on the hosts to allow TCP/UDP ports 5001 and TCP port 5002

29
README.rst Normal file

@@ -0,0 +1,29 @@
===============================
vmtp
===============================
A data path performance tool for OpenStack clouds.
* Free software: Apache license
* Documentation: http://docs.openstack.org/developer/vmtp
* Source: http://git.openstack.org/cgit/stackforge/vmtp
* Bugs: http://bugs.launchpad.net/vmtp
Features
--------
VMTP is a Python application that automatically performs ping connectivity checks, ping round-trip time (latency) measurement, and TCP/UDP throughput measurement for the following flows on any OpenStack deployment:
* VM to VM same network (private fixed IP)
* VM to VM different network same tenant (intra-tenant L3 fixed IP)
* VM to VM different network and tenant (floating IP inter-tenant L3)
Optionally, when an external Linux host is available:
* external host/VM download and upload throughput/latency (L3/floating IP)
Optionally, when ssh login to any Linux host (native or virtual) is available:
* host to host throughput (intra-node and inter-node)
Optionally, VMTP can automatically extract CPU usage from all native hosts in the cloud during the throughput tests, provided the Ganglia monitoring service (gmond) is installed and enabled on those hosts.

2
babel.cfg Normal file

@@ -0,0 +1,2 @@
[python: **.py]

162
cfg.default.yaml Normal file

@@ -0,0 +1,162 @@
#
# VMTP default configuration file
#
# This configuration file is ALWAYS loaded by VMTP and should never be modified by users.
# To specify your own user-specific property values, always define them in a separate config file
# and pass that file to the script using -c or --config <file>
# Property values in that config file will override the default values in the current file
#
---
# Name of the image to use for launching the test VMs. This name must be
# the exact same name used in OpenStack (as shown from 'nova image-list')
# Any image running Linux should work (Fedora, Ubuntu, CentOS...)
image_name: 'Ubuntu Server 14.04'
#image_name: 'Fedora 21'
# User name to use to ssh to the test VMs
# This is specific to the image being used
ssh_vm_username: 'ubuntu'
#ssh_vm_username: fedora
# Name of the flavor to use for the test VMs
# This name must be an exact match to a flavor name known by the target
# OpenStack deployment (as shown from 'nova flavor-list')
flavor_type: 'm1.small'
# Name of the availability zone to use for the test VMs
# Must be one of the zones listed by 'nova availability-zone-list'
# If the zone selected contains more than 1 compute node, the script
# will determine inter-node and intra-node throughput. If it contains only
# 1 compute node, only intra-node throughput will be measured.
availability_zone: 'nova'
# DNS server IP addresses to use for the VM (list of 1 or more DNS servers)
# This default DNS server is available on the Internet.
# Change this to use a different DNS server if necessary.
dns_nameservers: [ '8.8.8.8' ]
# VMTP can automatically create a VM image if the image named by
# image_name is missing, for that you need to specify a server IP address
# from which the image can be retrieved using the wget protocol
# These 2 properties are not used if the image is present
server_ip_for_image: '172.29.172.152'
image_path_in_server: 'downloads/trusty-server-cloudimg-amd64-disk1.qcow2'
# -----------------------------------------------------------------------------
# These variables are not likely to be changed
# Set this variable to a network name if you want the script to reuse
# a specific existing external network. If empty, the script will reuse the
# first external network it can find (the cloud must have at least 1
# external network defined and available for use)
# When set, ignore floating ip creation and reuse existing management network for tests
reuse_network_name :
# Use of the script for special deployments
floating_ip: True
# Set this to an existing VM name if the script should not create new VM
# and reuse existing VM
reuse_existing_vm :
# Default name for the router to use to connect the internal mgmt network
# with the external network. If a router exists with this name it will be
# reused, otherwise a new router will be created
router_name: 'pns-router'
# Default names for the internal networks used by the
# script. If an existing network with this name exists it will be reused.
# Otherwise a new internal network will be created with that name.
# 2 networks are needed to test the case of network to network communication
internal_network_name: ['pns-internal-net', 'pns-internal-net2']
# Name of the subnets associated to the internal mgmt network
internal_subnet_name: ['pns-internal-subnet', 'pns-internal-subnet2']
# Default CIDRs to use for the internal mgmt subnet
internal_cidr: ['192.168.1.0/24' , '192.168.2.0/24']
# The public key to use to ssh to all targets (VMs, containers, hosts)
# If starting with './' is relative to the location of the VMTP script
# else can be an absolute path
public_key_file: './ssh/id_rsa.pub'
# File containing the private key to use along with the public key
# If starting with './' is relative to the location of the script
# else can be an absolute path
private_key_file: './ssh/id_rsa'
# Name of the P&S public key in OpenStack
public_key_name: 'pns_public_key'
# name of the server VM
vm_name_server: 'TestServer'
# name of the client VM
vm_name_client: 'TestClient'
# name of the security group to create and use
security_group_name: 'pns-security'
# Location of the performance test tools.
perf_tool_path: './tools'
# ping variables
ping_count: 2
ping_pass_threshold: 80
# Max retry count for ssh to a VM (5 seconds between retries)
ssh_retry_count: 50
# General retry count
generic_retry_count: 50
# Times to run when measuring TCP Throughput
tcp_tp_loop_count: 3
# TCP throughput list of packet sizes to measure
# Can be overridden at the command line using --tcpbuf
tcp_pkt_sizes: [65536]
# UDP throughput list of packet sizes to measure
# By default we measure for small, medium and large packets
# Can be overridden at the command line using --udpbuf
udp_pkt_sizes: [128, 1024, 8192]
# UDP packet loss rate threshold in percentage beyond which bandwidth
# iterations stop and below which iteration with a higher
# bandwidth continues
# The first number is the minimal loss rate (inclusive)
# The second number is the maximum loss rate (inclusive)
# Iteration to find the "optimal" bandwidth will stop as soon as the loss rate
# falls within that range: min <= loss_rate <= max
# The final throughput measurement may return a loss rate out of this range
# as that measurement is taken on a longer time than when iterating to find
# the optimal throughput
#
udp_loss_rate_range: [2, 5]
# The default bandwidth limit (in Kbps) for TCP/UDP flow measurement
# 0 means unlimited, which can be overridden at the command line using --bandwidth
vm_bandwidth: 0
#######################################
# PNS MongoDB Connection information
#######################################
########################################
# Default MongoDB port is 27017, to override
#pns_mongod_port: <port no>
########################################
# MongoDB pns database.
########################################
pns_db: "pnsdb"
########################################
# MongoDB collection.
# use "officialdata" for offical runs only.
########################################
pns_collection: "testdata"

10
cfg.existing.yaml Normal file

@@ -0,0 +1,10 @@
#
# Example of a configuration that forces the use of a specific external network
# and a provider network (no floating IP)
reuse_network_name : 'prov1'
# Setting floating_ip to False means using a provider network that we simply attach to
floating_ip : False
# Floating ip is true by default:
# attach to existing network, create a floating ip and attach instance to it

14
cfg.nimbus.svl.yaml Normal file

@@ -0,0 +1,14 @@
# VM Throughput configuration for Nimbus (SVL6)
---
#image_name: 'ubuntu-trusty-server-cloudimg-amd64-disk1.2014-06-30'
#image_name: 'rhel-6.5_x86_64-2014-06-23-v4'
#image_name: 'centos-6.5_x86_64-2014-06-04-v3'
image_name: 'Ubuntu Server 14.04'
flavor_type: "Micro-Small"
availability_zone: "svl6-csl-b"
security_group_name: "pns-security1"
ext_net_name: "public-floating-76"
internal_cidr: ['192.168.10.0/24' , '192.168.20.0/24']

9
cfg.nimbus.yaml Normal file

@@ -0,0 +1,9 @@
# VM Throughput configuration for Nimbus
---
image_name: 'ubuntu-trusty-server-cloudimg-amd64-disk1.2014-09-27'
#image_name: 'rhel-6.5_x86_64-2014-06-23-v4'
#image_name: 'centos-6.5_x86_64-2014-06-04-v3'
flavor_type: "GP-Small"
availability_zone: "alln01-1-csx"

276
compute.py Normal file

@@ -0,0 +1,276 @@
# Copyright 2014 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
'''Module for Openstack compute operations'''
import os
import subprocess
import time
import novaclient
import novaclient.exceptions as exceptions
class Compute(object):
def __init__(self, nova_client, config):
self.novaclient = nova_client
self.config = config
def find_image(self, image_name):
try:
image = self.novaclient.images.find(name=image_name)
return image
except novaclient.exceptions.NotFound:
print 'ERROR: Did not find the image %s' % (image_name)
return None
def copy_and_upload_image(self, final_image_name, server_ip, image_path):
'''
Download the image locally via wget, upload it to Glance, remove
the local copy, then verify the image shows up in the Glance list
'''
wget_cmd = "wget --tries=1 http://" + str(server_ip) + "/" + str(image_path)
try:
subprocess.check_output(wget_cmd, shell=True)
except subprocess.CalledProcessError:
print 'ERROR: Failed to download, check filename %s via Wget' % (wget_cmd)
return 0
my_cwd = os.getcwd()
my_file_name = os.path.basename(image_path)
abs_fname_path = my_cwd + "/" + my_file_name
rm_file_cmd = "rm " + abs_fname_path
if os.path.isfile(abs_fname_path):
# upload in glance
glance_cmd = "glance image-create --name=\"" + str(final_image_name) + \
"\" --disk-format=qcow2" + " --container-format=bare < " + \
str(my_file_name)
subprocess.check_output(glance_cmd, shell=True)
# remove the image file from local dir
subprocess.check_output(rm_file_cmd, shell=True)
# check for the image in glance
glance_check_cmd = "glance image-list"
print "Will update image to glance via CLI: %s" % (glance_cmd)
result = subprocess.check_output(glance_check_cmd, shell=True)
if final_image_name in result:
print 'Image: %s successfully uploaded to Glance' % (final_image_name)
return 1
else:
print 'Glance image status:\n %s' % (result)
print 'ERROR: Did not find %s image in Glance' % (final_image_name)
return 0
else:
print 'ERROR: image %s not copied over locally via %s' % (my_file_name, wget_cmd)
return 0
# Remove the keypair from OpenStack if it exists
def remove_public_key(self, name):
keypair_list = self.novaclient.keypairs.list()
for key in keypair_list:
if key.name == name:
self.novaclient.keypairs.delete(name)
print 'Removed public key %s' % (name)
break
# Test if keypair file is present if not create it
def create_keypair(self, name, private_key_pair_file):
self.remove_public_key(name)
keypair = self.novaclient.keypairs.create(name)
# Now write the keypair to the file
kpf = os.open(private_key_pair_file,
os.O_WRONLY | os.O_CREAT, 0o600)
with os.fdopen(kpf, 'w') as kpf:
kpf.write(keypair.private_key)
return keypair
# Add an existing public key to openstack
def add_public_key(self, name, public_key_file):
self.remove_public_key(name)
# extract the public key from the file
public_key = None
try:
with open(os.path.expanduser(public_key_file)) as pkf:
public_key = pkf.read()
except IOError as exc:
print 'ERROR: Cannot open public key file %s: %s' % \
(public_key_file, exc)
return None
print 'Adding public key %s' % (name)
keypair = self.novaclient.keypairs.create(name, public_key)
return keypair
def find_network(self, label):
net = self.novaclient.networks.find(label=label)
return net
# Create a server instance with name vmname
# if exists delete and recreate
def create_server(self, vmname, image, flavor, key_name,
nic, sec_group, avail_zone=None, user_data=None,
retry_count=10):
# Also attach the created security group for the test
instance = self.novaclient.servers.create(name=vmname,
image=image,
flavor=flavor,
key_name=key_name,
nics=nic,
availability_zone=avail_zone,
userdata=user_data,
security_groups=[sec_group.id])
flag_exist = self.find_server(vmname, retry_count)
if flag_exist:
return instance
else:
return None
def get_server_list(self):
servers_list = self.novaclient.servers.list()
return servers_list
def find_floating_ips(self):
floating_ip = self.novaclient.floating_ips.list()
return floating_ip
# Return the server network for a server
def find_server_network(self, vmname):
servers_list = self.get_server_list()
for server in servers_list:
if server.name == vmname and server.status == "ACTIVE":
return server.networks
return None
# Returns True if the server is present, False if not.
# Retry for a few seconds since after VM creation sometimes
# it takes a while to show up
def find_server(self, vmname, retry_count):
for retry_attempt in range(retry_count):
servers_list = self.get_server_list()
for server in servers_list:
if server.name == vmname and server.status == "ACTIVE":
return True
# Sleep between retries
if self.config.debug:
print "[%s] VM not yet found, retrying %s of %s" \
% (vmname, (retry_attempt + 1), retry_count)
time.sleep(2)
print "[%s] VM not found, after %s attempts" % (vmname, retry_count)
return False
# Returns True if server is found and deleted/False if not,
# retry the delete if there is a delay
def delete_server_by_name(self, vmname):
servers_list = self.get_server_list()
for server in servers_list:
if server.name == vmname:
print 'deleting server %s' % (server)
self.novaclient.servers.delete(server)
return True
return False
def delete_server(self, server):
self.novaclient.servers.delete(server)
def find_flavor(self, flavor_type):
flavor = self.novaclient.flavors.find(name=flavor_type)
return flavor
#
# Return a list of hosts which are in a specific availability zone
# May fail per policy in that case return an empty list
def list_hypervisor(self, zone_info):
if self.config.hypervisors:
print 'Using hypervisors:' + ', '.join(self.config.hypervisors)
return self.config.hypervisors
avail_list = []
try:
host_list = self.novaclient.hosts.list()
for host in host_list:
if host.zone == zone_info:
avail_list.append(host.host_name)
except novaclient.exceptions.Forbidden:
print ('Operation Forbidden: could not retrieve list of servers'
' in AZ (likely no permission)')
return avail_list
# Given 2 VMs test if they are running on same Host or not
def check_vm_placement(self, vm_instance1, vm_instance2):
try:
server_instance_1 = self.novaclient.servers.get(vm_instance1)
server_instance_2 = self.novaclient.servers.get(vm_instance2)
if server_instance_1.hostId == server_instance_2.hostId:
return True
else:
return False
except exceptions.ClientException:
print "Exception in retrieving the hostId of servers"
return False
# Create a new security group with appropriate rules
def security_group_create(self):
# check first the security group exists
# May throw exceptions.NoUniqueMatch or NotFound
try:
group = self.novaclient.security_groups.find(name=self.config.security_group_name)
return group
except exceptions.NotFound:
group = self.novaclient.security_groups.create(name=self.config.security_group_name,
description="PNS Security group")
# Once the security group is created, try to find it iteratively
# (this check may no longer be necessary)
for _ in range(self.config.generic_retry_count):
group = self.novaclient.security_groups.get(group)
if group:
self.security_group_add_rules(group)
return group
else:
time.sleep(1)
return None
# except exceptions.NoUniqueMatch as exc:
# raise exc
# Delete a security group
def security_group_delete(self, group):
if group:
print "Deleting security group"
self.novaclient.security_groups.delete(group)
# Add rules to the security group
def security_group_add_rules(self, group):
# Allow ping traffic
self.novaclient.security_group_rules.create(group.id,
ip_protocol="icmp",
from_port=-1,
to_port=-1)
# Allow SSH traffic
self.novaclient.security_group_rules.create(group.id,
ip_protocol="tcp",
from_port=22,
to_port=22)
# Allow TCP/UDP traffic for perf tools like iperf/nuttcp
# 5001: Data traffic (standard iperf data port)
# 5002: Control traffic (non standard)
# note that 5000/tcp is already picked by openstack keystone
self.novaclient.security_group_rules.create(group.id,
ip_protocol="tcp",
from_port=5001,
to_port=5002)
self.novaclient.security_group_rules.create(group.id,
ip_protocol="udp",
from_port=5001,
to_port=5001)

110
credentials.py Normal file

@@ -0,0 +1,110 @@
# Copyright 2014 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Module for credentials in Openstack
import getpass
import os
import re
class Credentials(object):
def get_credentials(self):
dct = {}
dct['username'] = self.rc_username
dct['password'] = self.rc_password
dct['auth_url'] = self.rc_auth_url
dct['tenant_name'] = self.rc_tenant_name
return dct
def get_nova_credentials(self):
dct = {}
dct['username'] = self.rc_username
dct['api_key'] = self.rc_password
dct['auth_url'] = self.rc_auth_url
dct['project_id'] = self.rc_tenant_name
return dct
def get_nova_credentials_v2(self):
dct = self.get_nova_credentials()
dct['version'] = 2
return dct
#
# Read an openrc file and take care of the password
# The 2 args are passed from the command line and can be None
#
def __init__(self, openrc_file, pwd, no_env):
self.rc_password = None
self.rc_username = None
self.rc_tenant_name = None
self.rc_auth_url = None
success = True
if openrc_file:
if os.path.exists(openrc_file):
export_re = re.compile('export OS_([A-Z_]*)="?(.*)')
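# e.g. matches: export OS_AUTH_URL="http://10.0.0.1:5000/v2.0" (quotes optional)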
for line in open(openrc_file):
line = line.strip()
mstr = export_re.match(line)
if mstr:
# get rid of possible trailing double quote
# (the leading one was removed by the re)
name = mstr.group(1)
value = mstr.group(2)
if value.endswith('"'):
value = value[:-1]
# get rid of password assignment
# echo "Please enter your OpenStack Password: "
# read -sr OS_PASSWORD_INPUT
# export OS_PASSWORD=$OS_PASSWORD_INPUT
if value.startswith('$'):
continue
# now match against wanted variable names
if name == 'USERNAME':
self.rc_username = value
elif name == 'AUTH_URL':
self.rc_auth_url = value
elif name == 'TENANT_NAME':
self.rc_tenant_name = value
else:
print 'Error: rc file does not exist %s' % (openrc_file)
success = False
elif not no_env:
# no openrc file passed - we assume the variables have been
# sourced by the calling shell
# just check that they are present
for varname in ['OS_USERNAME', 'OS_AUTH_URL', 'OS_TENANT_NAME']:
if varname not in os.environ:
print 'Warning: %s is missing' % (varname)
success = False
if success:
self.rc_username = os.environ['OS_USERNAME']
self.rc_auth_url = os.environ['OS_AUTH_URL']
self.rc_tenant_name = os.environ['OS_TENANT_NAME']
# always override with CLI argument if provided
if pwd:
self.rc_password = pwd
# if the password is not known, check the env variable
elif self.rc_auth_url and not self.rc_password and success:
if 'OS_PASSWORD' in os.environ and not no_env:
self.rc_password = os.environ['OS_PASSWORD']
else:
# interactively ask for password
self.rc_password = getpass.getpass(
'Please enter your OpenStack Password: ')
if not self.rc_password:
self.rc_password = ""
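# A minimal usage sketch (not part of this module, names illustrative):
# parse an openrc file and hand the credentials to python-novaclient
#
#   import novaclient.client
#   cred = Credentials('admin-openrc.sh', None, False)
#   nova = novaclient.client.Client(**cred.get_nova_credentials_v2())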

75
doc/source/conf.py Executable file

@@ -0,0 +1,75 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
#'sphinx.ext.intersphinx',
'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'vmtp'
copyright = u'2013, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

4
doc/source/contributing.rst Normal file

@@ -0,0 +1,4 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst

25
doc/source/index.rst Normal file

@@ -0,0 +1,25 @@
.. vmtp documentation master file, created by
sphinx-quickstart on Tue Jul 9 22:26:36 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to vmtp's documentation!
========================================================
Contents:
.. toctree::
:maxdepth: 2
readme
installation
usage
contributing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

12
doc/source/installation.rst Normal file

@@ -0,0 +1,12 @@
============
Installation
============
At the command line::
$ pip install vmtp
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv vmtp
$ pip install vmtp

1
doc/source/readme.rst Normal file

@@ -0,0 +1 @@
.. include:: ../../README.rst

7
doc/source/usage.rst Normal file

@@ -0,0 +1,7 @@
========
Usage
========
To use vmtp in a project::
import vmtp

284
instance.py Normal file

@@ -0,0 +1,284 @@
# Copyright 2014 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import os
import re
import stat
import subprocess
import monitor
import sshutils
# a dictionary of sequence number indexed by a name prefix
prefix_seq = {}
#
# An openstack instance (can be a VM or a LXC)
#
class Instance(object):
def __init__(self, name, config, comp=None, net=None):
if name not in prefix_seq:
prefix_seq[name] = 1
seq = prefix_seq[name]
prefix_seq[name] = seq + 1
self.name = name + str(seq)
self.comp = comp
self.net = net
self.az = None
self.config = config
# internal network IP
self.internal_ip = None
self.ssh_ip = None
self.ssh_ip_id = None
self.ssh_user = config.ssh_vm_username
self.instance = None
self.ssh = None
if config.gmond_svr_ip:
self.gmond_svr = config.gmond_svr_ip
else:
self.gmond_svr = None
if config.gmond_svr_port:
self.gmond_port = int(config.gmond_svr_port)
else:
self.gmond_port = 0
# Setup the ssh connectivity
# Returns True if success
def setup_ssh(self, ssh_ip, ssh_user):
# used for displaying the source IP in json results
if not self.internal_ip:
self.internal_ip = ssh_ip
self.ssh_ip = ssh_ip
self.ssh_user = ssh_user
self.ssh = sshutils.SSH(self.ssh_user, self.ssh_ip,
key_filename=self.config.private_key_file,
connect_retry_count=self.config.ssh_retry_count)
return True
# Create a new VM instance, associate a floating IP for ssh access
# and extract internal network IP
# Returns True if success, False otherwise
def create(self, image, flavor_type,
keypair, nics,
az,
internal_network_name,
sec_group,
init_file_name=None):
# if ssh is already set up it means this is a native host, not a VM
if self.ssh:
return True
self.buginf('Starting on zone %s', az)
self.az = az
if init_file_name:
user_data = open(init_file_name)
else:
user_data = None
self.instance = self.comp.create_server(self.name,
image,
flavor_type,
keypair,
nics,
sec_group,
az,
user_data,
self.config.generic_retry_count)
if user_data:
user_data.close()
if not self.instance:
self.display('Server creation failed')
self.dispose()
return False
# If reusing existing management network skip the floating ip creation and association to VM
# Assume management network has direct access
if self.config.reuse_network_name:
self.ssh_ip = self.instance.networks[internal_network_name][0]
else:
fip = self.net.create_floating_ip()
if not fip:
self.display('Floating ip creation failed')
return False
self.ssh_ip = fip['floatingip']['floating_ip_address']
self.ssh_ip_id = fip['floatingip']['id']
self.buginf('Floating IP %s created', self.ssh_ip)
self.buginf('Started - associating floating IP %s', self.ssh_ip)
self.instance.add_floating_ip(self.ssh_ip)
# extract the IP for the data network
self.internal_ip = self.instance.networks[internal_network_name][0]
self.buginf('Internal network IP: %s', self.internal_ip)
self.buginf('SSH IP: %s', self.ssh_ip)
# create ssh session
if not self.setup_ssh(self.ssh_ip, self.config.ssh_vm_username):
return False
return True
# Send a command on the ssh session
# returns stdout
def exec_command(self, cmd, timeout=30):
(status, cmd_output, err) = self.ssh.execute(cmd, timeout=timeout)
if status:
self.display('ERROR cmd=%s' % (cmd))
if cmd_output:
self.display("%s", cmd_output)
if err:
self.display('error=%s' % (err))
return None
self.buginf('%s', cmd_output)
return cmd_output
# Display a status message with the standard header that has the instance
# name (e.g. [foo] some text)
def display(self, fmt, *args):
print ('[%s] ' + fmt) % ((self.name,) + args)
# Debugging message, to be printed only in debug mode
def buginf(self, fmt, *args):
if self.config.debug:
self.display(fmt, *args)
# Ping an IP from this instance
def ping_check(self, target_ip, ping_count, pass_threshold):
return self.ssh.ping_check(target_ip, ping_count, pass_threshold)
# Given a message size verify if ping without fragmentation works or fails
# Returns True if success
def ping_do_not_fragment(self, msg_size, ip_address):
cmd = "ping -M do -c 1 -s " + str(msg_size) + " " + ip_address
cmd_output = self.exec_command(cmd)
# exec_command returns None on error (non zero exit status);
# treat this as a failed (fragmented) ping
if not cmd_output:
return False
match = re.search('100% packet loss', cmd_output)
if match:
return False
else:
return True
# Set the interface IP address and mask
def set_interface_ip(self, if_name, ip, mask):
self.buginf('Setting interface %s to %s mask %s', if_name, ip, mask)
cmd2apply = "sudo ifconfig %s %s netmask %s" % (if_name, ip, mask)
(rc, _, _) = self.ssh.execute(cmd2apply)
return rc
# Get an interface IP address (returns None if error)
def get_interface_ip(self, if_name):
self.buginf('Getting interface %s IP and mask', if_name)
cmd2apply = "ifconfig %s" % (if_name)
(rc, res, _) = self.ssh.execute(cmd2apply)
if rc:
return None
# eth5 Link encap:Ethernet HWaddr 90:e2:ba:40:74:05
# inet addr:172.29.87.29 Bcast:172.29.87.31 Mask:255.255.255.240
# inet6 addr: fe80::92e2:baff:fe40:7405/64 Scope:Link
match = re.search(r'inet addr:([\d\.]*) ', res)
if not match:
return None
return match.group(1)
# Set an interface MTU to passed in value
def set_interface_mtu(self, if_name, mtu):
self.buginf('Setting interface %s mtu to %d', if_name, mtu)
cmd2apply = "sudo ifconfig %s mtu %d" % (if_name, mtu)
(rc, _, _) = self.ssh.execute(cmd2apply)
return rc
# Get the MTU of an interface
def get_interface_mtu(self, if_name):
cmd = "cat /sys/class/net/%s/mtu" % (if_name)
cmd_output = self.exec_command(cmd)
return int(cmd_output)
# scp a file from the local host to the instance
# Returns True if dest file already exists or scp succeeded
# False in case of scp error
def scp(self, tool_name, source, dest):
# check if the dest file is already present
if self.ssh.stat(dest):
self.buginf('tool %s already present - skipping install',
tool_name)
return True
# scp over the tool binary
# first chmod the local copy since git does not keep the permission
os.chmod(source, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
# scp to the target
scp_opts = '-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'
scp_cmd = 'scp -i %s %s %s %s@%s:%s' % (self.config.private_key_file,
scp_opts,
source,
self.ssh_user,
self.ssh_ip,
dest)
self.buginf('Copying %s to target...', tool_name)
self.buginf(scp_cmd)
devnull = open(os.devnull, 'wb')
rc = subprocess.call(scp_cmd, shell=True,
stdout=devnull, stderr=devnull)
if rc:
self.display('Copy to target failed rc=%d', rc)
self.display(scp_cmd)
return False
return True
def get_cmd_duration(self):
'''Get the duration of the client run
Will normally return the time configured in config.time
If cpu monitoring is enabled will make sure that this time is at least
30 seconds (to be adjusted based on metric collection frequency)
'''
if self.gmond_svr:
return max(30, self.config.time)
return self.config.time
def exec_with_cpu(self, cmd):
'''If cpu monitoring is enabled (--monitor) collect CPU in the background
while the test is running
:param cmd: the command to run on the instance (it is expected to run
for the duration returned by get_cmd_duration())
:return: a tuple (cmd_output, cpu_load)
'''
# ssh timeout should be at least set to the command duration
# we add 20 seconds to it as a safety
timeout = self.get_cmd_duration() + 20
if self.gmond_svr:
gmon = monitor.Monitor(self.gmond_svr, self.gmond_port)
# Adjust this frequency based on the collector's update frequency
# Here we assume a 10-second period and a max of 20 samples
gmon.start_monitoring_thread(freq=10, count=20)
cmd_output = self.exec_command(cmd, timeout)
gmon.stop_monitoring_thread()
# insert the cpu results into the results
cpu_load = gmon.build_cpu_metrics()
else:
cmd_output = self.exec_command(cmd, timeout)
cpu_load = None
return (cmd_output, cpu_load)
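# Usage sketch (hypothetical command and IP): run a client tool for the
# configured duration while CPU load is sampled in the background, e.g.
#   (out, cpu_load) = self.exec_with_cpu('/tmp/iperf -c 192.168.1.2 -t 10')
# cpu_load is None when cpu monitoring (--monitor) is not enabled.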
# Delete the floating IP
# Delete the server instance
# Dispose the ssh session
def dispose(self):
if self.ssh_ip_id:
self.net.delete_floating_ip(self.ssh_ip_id)
self.buginf('Floating IP %s deleted', self.ssh_ip)
self.ssh_ip_id = None
if self.instance:
self.comp.delete_server(self.instance)
self.buginf('Instance deleted')
self.instance = None
if self.ssh:
self.ssh.close()
self.ssh = None

206
iperf_tool.py Normal file

@@ -0,0 +1,206 @@
# Copyright 2014 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import re
from perf_tool import PerfTool
# Multipliers to normalize a bandwidth figure to Kbps
MULTIPLIERS = {'K': 1,
'M': 1.0e3,
'G': 1.0e6}
def get_bdw_kbps(bdw, bdw_unit):
if not bdw_unit:
# bits/sec
return bdw / 1000
if bdw_unit in MULTIPLIERS:
return int(bdw * MULTIPLIERS[bdw_unit])
print('Error: unknown multiplier: ' + bdw_unit)
return bdw
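# Sanity-check examples (illustrative values only):
#   get_bdw_kbps(1.05, 'M') -> 1050 (1.05 Mbits/sec = 1050 Kbps)
#   get_bdw_kbps(954000, '') -> 954 (plain bits/sec divided by 1000)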
class IperfTool(PerfTool):
def __init__(self, instance, perf_tool_path):
PerfTool.__init__(self, 'iperf', perf_tool_path, instance)
def get_server_launch_cmd(self):
'''Return the command to launch the server side.'''
# Need 1 server for tcp and 1 for udp (both on default port 5001)
return [self.dest_path + ' -s >/dev/null &',
self.dest_path + ' -s -u >/dev/null &']
def run_client(self, target_ip, target_instance,
mss=None, bandwidth=0, bidirectional=False):
'''Run the test
:return: list containing one or more dictionary results
'''
res_list = []
# Get list of protocols and packet sizes to measure
(proto_list, proto_pkt_sizes) = self.get_proto_profile()
for udp, pkt_size_list in zip(proto_list, proto_pkt_sizes):
# bidirectional is not supported for udp
# (need to find the right iperf options to make it work, as there are
# issues with the server sending back results to the client in the
# reverse direction)
if udp:
bidir = False
loop_count = 1
else:
# For accuracy purpose, TCP throughput will be measured 3 times
bidir = bidirectional
loop_count = self.instance.config.tcp_tp_loop_count
for pkt_size in pkt_size_list:
for _ in xrange(loop_count):
res = self.run_client_dir(target_ip, mss,
bandwidth_kbps=bandwidth,
bidirectional=bidir,
udp=udp,
length=pkt_size)
# for bidirectional the function returns a list of 2 results
res_list.extend(res)
return res_list
def run_client_dir(self, target_ip,
mss,
bidirectional=False,
bandwidth_kbps=0,
udp=False,
length=0,
no_cpu_timed=0):
'''Run client for given protocol and packet size
:param bandwidth_kbps: transmit rate limit in Kbps
:param udp: if true get UDP throughput, else get TCP throughput
:param length: length of network write|read buf (default 1K|8K/udp, 64K/tcp)
for udp is the packet size
:param no_cpu_timed: if non zero will disable cpu collection and override
the time with the provided value - used mainly for udp
to find quickly the optimal throughput using short
tests at various throughput values
:return: a list of dictionary with the 1 or 2 results (see parse_results())
'''
# run client using the default TCP window size (tcp window
# scaling is normally enabled by default so setting explicit window
# size is not going to help achieve better results)
opts = ''
if mss:
opts += " -M " + str(mss)
if bidirectional:
opts += " -r"
if length:
opts += " -l" + str(length)
if udp:
opts += " -u"
# for UDP if the bandwidth is not provided we need to calculate
# the optimal bandwidth
if not bandwidth_kbps:
udp_res = self.find_udp_bdw(length, target_ip)
if 'error' in udp_res:
return [udp_res]
if not self.instance.gmond_svr:
# if we do not collect CPU we might as well return
# the results found through iteration
return [udp_res]
bandwidth_kbps = udp_res['throughput_kbps']
if bandwidth_kbps:
opts += " -b%dK" % (bandwidth_kbps)
if no_cpu_timed:
duration_sec = no_cpu_timed
else:
duration_sec = self.instance.get_cmd_duration()
cmd = "%s -c %s -t %d %s" % (self.dest_path,
target_ip,
duration_sec,
opts)
self.instance.buginf(cmd)
if no_cpu_timed:
# force the timeout value with 20 second extra for the command to
# complete and do not collect CPU
cpu_load = None
cmd_out = self.instance.exec_command(cmd, duration_sec + 20)
else:
(cmd_out, cpu_load) = self.instance.exec_with_cpu(cmd)
if udp:
# Decode UDP output (unicast and multicast):
#
# [ 3] local 127.0.0.1 port 54244 connected with 127.0.0.1 port 5001
# [ ID] Interval Transfer Bandwidth
# [ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec
# [ 3] Sent 893 datagrams
# [ 3] Server Report:
# [ ID] Interval Transfer Bandwidth Jitter Lost/Total Da
# [ 3] 0.0-10.0 sec 1.25 MBytes 1.05 Mbits/sec 0.032 ms 1/894 (0.11%)
# [ 3] 0.0-15.0 sec 14060 datagrams received out-of-order
re_udp = r'([\d\.]*)\s*([KMG]?)bits/sec\s*[\d\.]*\s*ms\s*(\d*)/\s*(\d*) '
match = re.search(re_udp, cmd_out)
if match:
bdw = float(match.group(1))
bdw_unit = match.group(2)
drop = float(match.group(3))
pkt = int(match.group(4))
# iperf uses multiple of 1000 for K - not 1024
return [self.parse_results('UDP',
get_bdw_kbps(bdw, bdw_unit),
lossrate=round(drop * 100 / pkt, 2),
msg_size=length,
cpu_load=cpu_load)]
else:
# TCP output:
# [ 3] local 127.0.0.1 port 57936 connected with 127.0.0.1 port 5001
# [ ID] Interval Transfer Bandwidth
# [ 3] 0.0-10.0 sec 2.09 GBytes 1.79 Gbits/sec
#
# For bi-directional option (-r), last 3 lines:
# [ 5] 0.0-10.0 sec 36.0 GBytes 31.0 Gbits/sec
# [ 4] local 127.0.0.1 port 5002 connected with 127.0.0.1 port 39118
# [ 4] 0.0-10.0 sec 36.0 GBytes 30.9 Gbits/sec
re_tcp = r'Bytes\s*([\d\.]*)\s*([KMG])bits/sec'
match = re.search(re_tcp, cmd_out)
if match:
bdw = float(match.group(1))
bdw_unit = match.group(2)
res = [self.parse_results('TCP',
get_bdw_kbps(bdw, bdw_unit),
msg_size=length,
cpu_load=cpu_load)]
if bidirectional:
# decode the last row results
re_tcp = r'Bytes\s*([\d\.]*)\s*([KMG])bits/sec$'
match = re.search(re_tcp, cmd_out)
if match:
bdw = float(match.group(1))
bdw_unit = match.group(2)
# use the same cpu load since the same run
# does both directions
res.append(self.parse_results('TCP',
get_bdw_kbps(bdw, bdw_unit),
reverse_dir=True,
msg_size=length,
cpu_load=cpu_load))
return res
return [self.parse_error('Could not parse: %s' % (cmd_out))]

443
monitor.py Executable file

@@ -0,0 +1,443 @@
#!/usr/bin/env python
# Copyright 2014 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
'''
Module for parsing statistical output from Ganglia (gmond) server
The module opens a socket connection to collect statistical data.
It parses the raw data in xml format.
The data from ganglia/gmond is in a hierarchical xml format as below:
<CLUSTER>
<HOST..>
<METRIC ../>
<METRIC ../>
:
</HOST>
:
<HOST..>
<METRIC ../>
<METRIC ../>
</HOST>
</CLUSTER>
## Usage:
Using the module is simple.
1. instantiate the Monitor with the gmond server ip and port to poll.
gmon = Monitor("172.22.191.151", 8649)
2. Start the monitoring thread
gmon.start_monitoring_thread(frequency, count)
< run tests/tasks>
gmon.stop_monitoring_thread()
3. Collecting stats:
cpu_metric = gmon.build_cpu_metrics()
Returns a dictionary object with all the cpu stats for each
node
'''
import datetime
import re
import socket
import subprocess
from threading import Thread
import time
from lxml import etree
class MonitorExecutor(Thread):
'''
Thread handler class to asynchronously collect stats
'''
THREAD_STOPPED = 0
THREAD_RUNNING = 1
def __init__(self, gmond_svr, gmond_port, freq=5, count=5):
super(MonitorExecutor, self).__init__()
self.gmond_svr_ip = gmond_svr
self.gmond_port = gmond_port
self.freq = freq
self.count = count
self.force_stop = False
self.thread_status = MonitorExecutor.THREAD_STOPPED
# This list accumulates the parsed metric snapshots (latest last).
self.gmond_parsed_tree_list = []
def run(self):
'''
The thread runnable method.
The function will periodically poll the gmond server and
collect the metrics.
'''
self.thread_status = MonitorExecutor.THREAD_RUNNING
count = self.count
while count > 0:
if self.force_stop:
self.thread_status = MonitorExecutor.THREAD_STOPPED
return
self.parse_gmond_xml_data()
count -= 1
time.sleep(self.freq)
self.thread_status = MonitorExecutor.THREAD_STOPPED
def set_force_stop(self):
'''
Setting the force stop flag to stop the thread. By default
the thread stops after the specific count/iterations is reached
'''
self.force_stop = True
def parse_gmond_xml_data(self):
'''
Parse gmond data (V2)
Retrieve the ganglia stats from the aggregation node and append
the parsed snapshot to self.gmond_parsed_tree_list (no-op on error)
'''
gmond_parsed_tree = {}
raw_data = self.retrieve_stats_raw()
if raw_data is None or len(raw_data) == 0:
print "Failed to retrieve stats from server"
return
xtree = etree.XML(raw_data)
############################################
# Populate cluster information.
############################################
for elem in xtree.iter('CLUSTER'):
gmond_parsed_tree['CLUSTER-NAME'] = str(elem.get('NAME'))
gmond_parsed_tree['LOCALTIME'] = str(elem.get('LOCALTIME'))
gmond_parsed_tree['URL'] = str(elem.get('URL'))
host_list = []
for helem in elem.iterchildren():
host = {}
host['NAME'] = str(helem.get('NAME'))
host['IP'] = str(helem.get('IP'))
host['REPORTED'] = str(helem.get('REPORTED'))
host['TN'] = str(helem.get('TN'))
host['TMAX'] = str(helem.get('TMAX'))
host['DMAX'] = str(helem.get('DMAX'))
host['LOCATION'] = str(helem.get('LOCATION'))
host['GMOND_STARTED'] = str(helem.get('GMOND_STARTED'))
mlist = []
for metric in helem.iterchildren():
mdic = {}
mdic['NAME'] = str(metric.get('NAME'))
mdic['VAL'] = str(metric.get('VAL'))
mlist.append(mdic)
host['metrics'] = mlist
host_list.append(host)
gmond_parsed_tree['hosts'] = host_list
stat_dt = datetime.datetime.now()
gmond_parsed_tree['dt'] = stat_dt
self.gmond_parsed_tree_list.append(gmond_parsed_tree)
def retrieve_stats_raw(self):
'''
Retrieve stats from the gmond process.
'''
soc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
soc.settimeout(10)
try:
soc.connect((self.gmond_svr_ip, self.gmond_port))
except socket.error as exp:
print "Connection failure host: %s [%s]" % (self.gmond_svr_ip, exp)
return None
data = ""
while True:
try:
rbytes = soc.recv(4096)
except socket.error as exp:
print "Read failed for host: ", str(exp)
return None
if len(rbytes) == 0:
break
data += rbytes
soc.close()
return data
class Monitor(object):
gmond_svr_ip = None
gmond_port = None
gmond_parsed_tree = {}
def __init__(self, gmond_svr, gmond_port=8649):
'''
The constructor simply sets the values of the gmond server and port.
'''
self.gmond_svr_ip = gmond_svr
self.gmond_port = gmond_port
# List of all stats.
self.gmond_parsed_tree_list = []
# series for all cpu loads
self.cpu_res = {}
self.mon_thread = None
def start_monitoring_thread(self, freq=10, count=10):
'''
Start the monitoring thread.
'''
self.mon_thread = MonitorExecutor(self.gmond_svr_ip,
self.gmond_port, freq, count)
self.mon_thread.start()
def stop_monitoring_thread(self):
self.mon_thread.set_force_stop()
self.gmond_parsed_tree_list = self.mon_thread.gmond_parsed_tree_list
def strip_raw_telnet_output(self, raw_data):
'''
When using the retrieve_stats_raw_telnet api, the raw data
has some additional text along with the xml data. We need to
strip that before we can pass it through the lxml parser.
'''
data = ""
xml_flag = False
for line in raw_data.splitlines():
if re.match(r".*<?xml version.*", line):
xml_flag = True
if xml_flag:
data += line + "\n"
return data
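# Illustration (hypothetical telnet output): preamble lines such as
#   Trying 172.22.191.151...
#   Connected to 172.22.191.151.
# are dropped, and everything from the first '<?xml version' line
# onward is kept for the lxml parser.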
def retrieve_stats_raw_telnet(self):
'''
An alternate retrieval method: spawn a subprocess that executes
the telnet command on the port to retrieve the raw xml data.
'''
cmd = "telnet " + self.gmond_svr_ip + " " + str(self.gmond_port)
print "cmd: ", cmd
port = str(self.gmond_port)
proc = subprocess.Popen(["telnet", self.gmond_svr_ip, port],
stdout=subprocess.PIPE)
(output, _) = proc.communicate()
newout = self.strip_raw_telnet_output(output)
return newout
def get_host_list(self, gmond_parsed_tree):
'''
Function returns all the hosts {} as a list.
'''
return gmond_parsed_tree['hosts']
def get_metric_value(self, parsed_node, host_name, name):
'''
The function returns the value of a specific metric, given
the host name and the metric name to collect.
'''
for host in parsed_node['hosts']:
if host['NAME'] == host_name:
for metric in host['metrics']:
if metric['NAME'] == name:
return metric['VAL']
return 0
def get_aggregate_cpu_usage(self, parsed_node, host_name):
'''
The function returns the aggregate CPU usage for a specific host.
equation: (cpu_user + cpu_system) * num_cpu / 100
'''
cpu_user = float(self.get_metric_value(parsed_node, host_name, "cpu_user"))
cpu_system = float(self.get_metric_value(parsed_node, host_name, "cpu_system"))
cpu_num = int(self.get_metric_value(parsed_node, host_name, "cpu_num"))
return (cpu_user + cpu_system) * cpu_num / 100
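# Worked example (illustrative values): with cpu_user=12.0, cpu_system=3.0
# and cpu_num=4, the aggregate usage is (12.0 + 3.0) * 4 / 100 = 0.6,
# i.e. the equivalent of 0.6 of one CPU busy on that host.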
def build_cpu_metrics(self):
'''Add a new set of cpu metrics to the results dictionary self.cpu_res
The result dest dictionary should look like this:
key = host IP, value = list of cpu loads where the
first value is the baseline value followed by 1 or more
values collected during the test
{
'10.0.0.1': [ 0.03, 1.23, 1.20 ],
'10.0.0.2': [ 0.10, 1.98, 2.72 ]
}
After another xml is decoded:
{
'10.0.0.1': [ 0.03, 1.23, 1.20, 1.41 ],
'10.0.0.2': [ 0.10, 1.98, 2.72, 2.04 ]
}
Each value in the list is the cpu load calculated as
(cpu_user + cpu_system) * num_cpu / 100
The load_five metric cannot be used as it is the average over the last 5 minutes
'''
cpu_res = {}
for parsed_node in self.gmond_parsed_tree_list:
for host in self.get_host_list(parsed_node):
host_ip = host['IP']
cpu_num = 0
cpu_user = 0.0
cpu_system = 0.0
cpu_user = float(self.get_metric_value(parsed_node, host['NAME'], "cpu_user"))
cpu_system = float(self.get_metric_value(parsed_node, host['NAME'], "cpu_system"))
cpu_num = int(self.get_metric_value(parsed_node, host['NAME'], "cpu_num"))
cpu_load = round(((cpu_user + cpu_system) * cpu_num) / 100, 2)
try:
cpu_res[host_ip].append(cpu_load)
except KeyError:
cpu_res[host_ip] = [cpu_load]
return cpu_res
def get_formatted_datetime(self, parsed_node):
'''
Returns the datetime as a formatted string. This is the
time when the last stat was collected.
'''
now = parsed_node['dt']
fmt_dt = "[" + str(now.hour) + ":" + str(now.minute) + \
":" + str(now.second) + "]"
return fmt_dt
def get_formatted_host_row(self, host_list):
'''
Returns the hosts in formatted order (for printing purposes)
'''
row_str = "".ljust(10)
for host in host_list:
row_str += host['NAME'].ljust(15)
return row_str
def get_formatted_metric_row(self, parsed_node, metric, justval):
'''
Returns a specific metric for all hosts in the same row
as a formatted string (for printing)
'''
host_list = self.get_host_list(parsed_node)
row_str = metric.ljust(len(metric) + 2)
for host in host_list:
val = self.get_metric_value(parsed_node, host['NAME'], metric)
row_str += str(val).ljust(justval)
return row_str
def dump_cpu_stats(self):
'''
Print the CPU stats
'''
hl_len = 80
print "-" * hl_len
print "CPU Statistics: ",
for parsed_node in self.gmond_parsed_tree_list:
hosts = self.get_host_list(parsed_node)
print self.get_formatted_datetime(parsed_node)
print self.get_formatted_host_row(hosts)
print "-" * hl_len
print self.get_formatted_metric_row(parsed_node, "cpu_user", 18)
print self.get_formatted_metric_row(parsed_node, "cpu_system", 18)
print "Aggregate ",
for host in hosts:
print str(self.get_aggregate_cpu_usage(parsed_node,
host['NAME'])).ljust(16),
print "\n"
def dump_gmond_parsed_tree(self):
'''
Display the full tree parsed from the gmond server stats.
'''
hl_len = 60
for parsed_node in self.gmond_parsed_tree_list:
print "%-20s (%s) URL: %s " % \
(parsed_node['CLUSTER-NAME'],
parsed_node['LOCALTIME'],
parsed_node['URL'])
print "-" * hl_len
row_str = " ".ljust(9)
for host in parsed_node['hosts']:
row_str += host['NAME'].ljust(15)
row_str += "\n"
print row_str
print "-" * hl_len
metric_count = len(parsed_node['hosts'][0]['metrics'])
for count in range(0, metric_count):
row_str = ""
host = parsed_node['hosts'][0]
row_str += parsed_node['hosts'][0]['metrics'][count]['NAME'].ljust(18)
for host in parsed_node['hosts']:
val = str(self.get_metric_value(parsed_node, host['NAME'],
host['metrics'][count]['NAME']))
row_str += val.ljust(12)
row_str += str(parsed_node['hosts'][0]).ljust(5)
print row_str
##################################################
# Only invoke the module directly for test purposes. Should be
# invoked from pns script.
##################################################
def main():
print "main: monitor"
gmon = Monitor("172.22.191.151", 8649)
gmon.start_monitoring_thread(freq=5, count=20)
print "wait for 15 seconds"
time.sleep(20)
print "Now force the thread to stop"
gmon.stop_monitoring_thread()
gmon.dump_cpu_stats()
cpu_metric = gmon.build_cpu_metrics()
print "cpu_metric: ", cpu_metric
if __name__ == "__main__":
main()

262
network.py Executable file

@@ -0,0 +1,262 @@
# Copyright 2014 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import time
# Module containing a helper class for operating on OpenStack networks
import neutronclient.common.exceptions as neutron_exceptions
class Network(object):
#
# This constructor will try to find an external network (will use the
# first network that is tagged as external - irrespective of its name)
# and a router attached to it (irrespective of the router name).
# ext_router_name is the name of the external router to create if not None
# and if no external router is found
#
def __init__(self, neutron_client, config):
self.neutron_client = neutron_client
self.networks = neutron_client.list_networks()['networks']
self.ext_net = None
self.ext_router = None
self.ext_router_created = False
self.config = config
# internal (mgmt/data) networks:
# - the first is used for same-network communication
# - the second for network to network communication
self.vm_int_net = []
self.ext_router_name = None
# If reusing existing management network just find this network
if self.config.reuse_network_name:
# An existing management network must be reused
int_net = self.lookup_network(self.config.reuse_network_name)
self.vm_int_net.append(int_net)
return
##############################################
# Use the network matching the user-provided ext_net_name if it
# exists, otherwise fall back to the first external network found
##############################################
for network in self.networks:
if network['router:external']:
try:
if network['name'] == config.ext_net_name:
self.ext_net = network
break
if not self.ext_net:
self.ext_net = network
except KeyError:
###############################################
# A KeyError indicates no user-defined external
# network was given, so use the first one
###############################################
self.ext_net = network
break
if not self.ext_net:
print "No external network found."
return
print "Using external network: " + self.ext_net['name']
# Find or create the router to the external network
ext_net_id = self.ext_net['id']
routers = neutron_client.list_routers()['routers']
for router in routers:
external_gw_info = router['external_gateway_info']
if external_gw_info:
if external_gw_info['network_id'] == ext_net_id:
self.ext_router = router
print 'Found external router: %s' % \
(self.ext_router['name'])
break
# create a new external router if none found and a name was given
self.ext_router_name = config.router_name
if (not self.ext_router) and self.ext_router_name:
self.ext_router = self.create_router(self.ext_router_name,
self.ext_net['id'])
print '[%s] Created ext router' % (self.ext_router_name)
self.ext_router_created = True
# Create the 2 internal networks
for (net, subnet, cidr) in zip(config.internal_network_name,
config.internal_subnet_name,
config.internal_cidr):
int_net = self.create_net(net, subnet, cidr,
config.dns_nameservers)
self.vm_int_net.append(int_net)
# Add both internal networks to router interface to enable network to network connectivity
self.__add_router_interface()
# Create a network with associated subnet
# Check first if a network with the same name exists, if it exists
# return that network.
# dns_nameservers: a list of name servers e.g. ['8.8.8.8']
def create_net(self, network_name, subnet_name, cidr, dns_nameservers):
for network in self.networks:
if network['name'] == network_name:
print ('Found existing internal network: %s'
% (network_name))
return network
body = {
'network': {
'name': network_name,
'admin_state_up': True
}
}
network = self.neutron_client.create_network(body)['network']
body = {
'subnet': {
'name': subnet_name,
'cidr': cidr,
'network_id': network['id'],
'enable_dhcp': True,
'ip_version': 4,
'dns_nameservers': dns_nameservers
}
}
subnet = self.neutron_client.create_subnet(body)['subnet']
# add subnet id to the network dict since it has just been added
network['subnets'] = [subnet['id']]
print 'Created internal network: %s' % (network_name)
return network
# Delete a network and associated subnet
def delete_net(self, network):
if network:
name = network['name']
# it may take some time for ports to be cleared so we need to retry
for _ in range(1, 5):
try:
self.neutron_client.delete_network(network['id'])
print 'Network %s deleted' % (name)
break
except neutron_exceptions.NetworkInUseClient:
time.sleep(1)
# Add a network/subnet to a logical router
# Check that it is not already attached to the network/subnet
def __add_router_interface(self):
# Each internal network is supposed to be a private network with exactly
# 1 subnet, so pick the first subnet in the list.
# But first check that the router does not already have this subnet:
# retrieve the list of all ports, then check if there is one port that
# - matches the subnet
# - and is attached to the router
# It is assumed that both internal networks are created together, so
# checking one of them is sufficient
ports = self.neutron_client.list_ports()['ports']
for port in ports:
port_ip = port['fixed_ips'][0]
if (port['device_id'] == self.ext_router['id']) and \
(port_ip['subnet_id'] == self.vm_int_net[0]['subnets'][0]):
print 'Ext router already associated to the internal network'
return
for int_net in self.vm_int_net:
body = {
'subnet_id': int_net['subnets'][0]
}
self.neutron_client.add_interface_router(self.ext_router['id'], body)
if self.config.debug:
print 'Ext router associated to ' + int_net['name']
# Detach the ext router from the mgmt network
def __remove_router_interface(self):
for int_net in self.vm_int_net:
if int_net:
body = {
'subnet_id': int_net['subnets'][0]
}
self.neutron_client.remove_interface_router(self.ext_router['id'],
body)
# Lookup network given network name
def lookup_network(self, network_name):
networks = self.neutron_client.list_networks(name=network_name)
return networks['networks'][0]
# Create a router and update its external gateway to point
# to the external network
def create_router(self, router_name, net_id):
body = {
"router": {
"name": router_name,
"admin_state_up": True,
"external_gateway_info": {
"network_id": net_id
}
}
}
router = self.neutron_client.create_router(body)
return router['router']
# Show a router based on name
def show_router(self, router_name):
router = self.neutron_client.show_router(router_name)
return router
# Update a router given router and network id
def update_router(self, router_id, net_id):
print net_id
body = {
"router": {
"name": "pns-router",
"external_gateway_info": {
"network_id": net_id
}
}
}
router = self.neutron_client.update_router(router_id, body)
return router['router']
# Create a floating ip on the external network and return it
def create_floating_ip(self):
body = {
"floatingip": {
"floating_network_id": self.ext_net['id']
}
}
fip = self.neutron_client.create_floatingip(body)
return fip
# Delete a floating IP given its id
def delete_floating_ip(self, floatingip):
self.neutron_client.delete_floatingip(floatingip)
# Dispose all network resources, call after all VM have been deleted
def dispose(self):
# Delete the internal networks only if we did not reuse an existing
# network
if not self.config.reuse_network_name:
self.__remove_router_interface()
for int_net in self.vm_int_net:
self.delete_net(int_net)
# delete the router only if its name matches the pns router name
if self.ext_router_created:
try:
if self.ext_router['name'] == self.ext_router_name:
self.neutron_client.delete_router(self.ext_router['id'])
print 'External router %s deleted' % \
(self.ext_router['name'])
except TypeError:
print "No external router set"

194
nuttcp_tool.py Normal file

@@ -0,0 +1,194 @@
# Copyright 2014 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import re
from perf_tool import PerfTool
import sshutils
class NuttcpTool(PerfTool):
def __init__(self, instance, perf_tool_path):
PerfTool.__init__(self, 'nuttcp-7.3.2', perf_tool_path, instance)
def get_server_launch_cmd(self):
'''Return the commands to launch the server side.'''
return [self.dest_path + ' -P5002 -S --single-threaded &']
def run_client(self, target_ip, target_instance,
mss=None, bandwidth=0, bidirectional=False):
'''Run the test
:return: list containing one or more dictionary results
'''
res_list = []
if bidirectional:
reverse_dir_list = [False, True]
else:
reverse_dir_list = [False]
# Get list of protocols and packet sizes to measure
(proto_list, proto_pkt_sizes) = self.get_proto_profile()
for udp, pkt_size_list in zip(proto_list, proto_pkt_sizes):
for pkt_size in pkt_size_list:
for reverse_dir in reverse_dir_list:
# nuttcp does not support reverse dir for UDP...
if reverse_dir and udp:
continue
if udp:
self.instance.display('Measuring UDP Throughput (packet size=%d)...',
pkt_size)
loop_count = 1
else:
# For accuracy purpose, TCP throughput will be measured 3 times
self.instance.display('Measuring TCP Throughput (packet size=%d)...',
pkt_size)
loop_count = self.instance.config.tcp_tp_loop_count
for _ in xrange(loop_count):
res = self.run_client_dir(target_ip, mss,
reverse_dir=reverse_dir,
bandwidth_kbps=bandwidth,
udp=udp,
length=pkt_size)
res_list.extend(res)
# For UDP reverse direction we need to start the server on self.instance
# and run the client on target_instance
if bidirectional:
# Start the server on the client (this tool instance)
self.instance.display('Start UDP server for reverse dir')
if self.start_server():
# Start the client on the target instance
target_instance.display('Starting UDP client for reverse dir')
for pkt_size in self.instance.config.udp_pkt_sizes:
self.instance.display('Measuring UDP Throughput packet size=%d'
' (reverse direction)...',
pkt_size)
res = target_instance.tp_tool.run_client_dir(self.instance.internal_ip,
mss,
bandwidth_kbps=bandwidth,
udp=True,
length=pkt_size)
res[0]['direction'] = 'reverse'
res_list.extend(res)
else:
self.instance.display('Failed to start UDP server for reverse dir')
return res_list
def run_client_dir(self, target_ip,
mss,
reverse_dir=False,
bandwidth_kbps=0,
udp=False,
length=0,
no_cpu_timed=0):
'''Run client in one direction
:param reverse_dir: True if reverse the direction (tcp only for now)
:param bandwidth_kbps: transmit rate limit in Kbps
:param udp: if true get UDP throughput, else get TCP throughput
:param length: length of network write|read buf (default 1K|8K/udp, 64K/tcp)
for udp is the packet size
:param no_cpu_timed: if non zero will disable cpu collection and override
the time with the provided value - used mainly for udp
to find quickly the optimal throughput using short
tests at various throughput values
:return: a list of 1 dictionary with the results (see parse_results())
'''
# run client using the default TCP window size (tcp window
# scaling is normally enabled by default so setting explicit window
# size is not going to help achieve better results)
opts = ''
if mss:
opts += "-M" + str(mss)
if reverse_dir:
opts += " -F -r"
if length:
opts += " -l" + str(length)
if udp:
opts += " -u"
# for UDP if the bandwidth is not provided we need to calculate
# the optimal bandwidth
if not bandwidth_kbps:
udp_res = self.find_udp_bdw(length, target_ip)
if 'error' in udp_res:
return [udp_res]
if not self.instance.gmond_svr:
# if we do not collect CPU we might as well return
# the results found through iteration
return [udp_res]
bandwidth_kbps = udp_res['throughput_kbps']
if bandwidth_kbps:
opts += " -R%sK" % (bandwidth_kbps)
if no_cpu_timed:
duration_sec = no_cpu_timed
else:
duration_sec = self.instance.get_cmd_duration()
# use data port 5001 and control port 5002
# must be enabled in the VM security group
cmd = "%s -T%d %s -p5001 -P5002 -fparse %s" % (self.dest_path,
duration_sec,
opts,
target_ip)
self.instance.buginf(cmd)
try:
if no_cpu_timed:
# force the timeout value with 20 second extra for the command to
# complete and do not collect CPU
cpu_load = None
cmd_out = self.instance.exec_command(cmd, duration_sec + 20)
else:
(cmd_out, cpu_load) = self.instance.exec_with_cpu(cmd)
except sshutils.SSHError as exc:
# Timeout or any SSH error
self.instance.display('SSH Error:' + str(exc))
return [self.parse_error(str(exc))]
if udp:
# UDP output (unicast and multicast):
# megabytes=1.1924 real_seconds=10.01 rate_Mbps=0.9997 tx_cpu=99 rx_cpu=0
# drop=0 pkt=1221 data_loss=0.00000
re_udp = r'rate_Mbps=([\d\.]*) tx_cpu=\d* rx_cpu=\d* drop=(\d*) pkt=(\d*)'
match = re.search(re_udp, cmd_out)
if match:
rate_mbps = float(match.group(1))
drop = float(match.group(2))
pkt = int(match.group(3))
# nuttcp uses multiple of 1000 for Kbps - not 1024
return [self.parse_results('UDP',
int(rate_mbps * 1000),
lossrate=round(drop * 100 / pkt, 2),
reverse_dir=reverse_dir,
msg_size=length,
cpu_load=cpu_load)]
else:
# TCP output:
# megabytes=1083.4252 real_seconds=10.04 rate_Mbps=905.5953 tx_cpu=3 rx_cpu=19
# retrans=0 rtt_ms=0.55
re_tcp = r'rate_Mbps=([\d\.]*) tx_cpu=\d* rx_cpu=\d* retrans=(\d*) rtt_ms=([\d\.]*)'
match = re.search(re_tcp, cmd_out)
if match:
rate_mbps = float(match.group(1))
retrans = int(match.group(2))
rtt_ms = float(match.group(3))
return [self.parse_results('TCP',
int(rate_mbps * 1000),
retrans=retrans,
rtt_ms=rtt_ms,
reverse_dir=reverse_dir,
msg_size=length,
cpu_load=cpu_load)]
return [self.parse_error('Could not parse: %s' % (cmd_out))]

6
openstack-common.conf Normal file

@@ -0,0 +1,6 @@
[DEFAULT]
# The list of modules to copy from oslo-incubator.git
# The base module to hold the copy of openstack.common
base=vmtp

105
perf_instance.py Normal file

@@ -0,0 +1,105 @@
# Copyright 2014 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from instance import Instance
from perf_tool import PingTool
class PerfInstance(Instance):
'''An openstack instance to run performance tools
'''
def __init__(self, name, config, comp=None, net=None, server=False):
Instance.__init__(self, name, config, comp, net)
self.is_server = server
if 'I' in config.protocols:
self.ping = PingTool(self)
else:
self.ping = None
if config.tp_tool:
self.tp_tool = config.tp_tool(self, config.perf_tool_path)
else:
self.tp_tool = None
# Calling create() with no arguments is reserved for the native host server case
def create(self, image=None, flavor_type=None,
keypair=None, nics=None, az=None,
management_network_name=None,
sec_group=None,
init_file_name=None):
'''Create an instance
:return: True on success, False on error
'''
rc = Instance.create(self, image, flavor_type, keypair,
nics, az,
management_network_name,
sec_group,
init_file_name)
if not rc:
return False
if self.tp_tool and not self.tp_tool.install():
return False
if not self.is_server:
return True
if self.tp_tool and not self.tp_tool.start_server():
return False
return True
def run_client(self, label, dest_ip, target_instance, mss=None,
bandwidth=0,
bidirectional=False,
az_to=None):
'''Run the ping and throughput tool clients using the default TCP window
size (tcp window scaling is normally enabled by default so setting an
explicit window size is not going to help achieve better results)
:return: a dictionary containing the results of the run
'''
# Latency (ping rtt)
if 'I' in self.config.protocols:
ping_res = self.ping.run_client(dest_ip)
else:
ping_res = None
# TCP/UDP throughput with tp_tool, returns a list of dict
# (skipped if ping failed; ping_res is None when ping is not enabled)
if self.tp_tool and (not ping_res or 'error' not in ping_res):
tp_tool_res = self.tp_tool.run_client(dest_ip,
target_instance,
mss=mss,
bandwidth=bandwidth,
bidirectional=bidirectional)
else:
tp_tool_res = []
res = {'ip_to': dest_ip}
if self.internal_ip:
res['ip_from'] = self.internal_ip
if label:
res['desc'] = label
if self.az:
res['az_from'] = self.az
if az_to:
res['az_to'] = az_to
res['distro_id'] = self.ssh.distro_id
res['distro_version'] = self.ssh.distro_version
# consolidate results for all tools
if ping_res:
tp_tool_res.append(ping_res)
res['results'] = tp_tool_res
return res
# Override in order to terminate the perf server
def dispose(self):
if self.tp_tool:
self.tp_tool.dispose()
Instance.dispose(self)

289
perf_tool.py Normal file

@@ -0,0 +1,289 @@
# Copyright 2014 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import abc
import os
import re
# where to copy the tool on the target, must end with slash
SCP_DEST_DIR = '/tmp/'
#
# A base class for all tools that can be associated to an instance
#
class PerfTool(object):
__metaclass__ = abc.ABCMeta
def __init__(self, name, perf_tool_path, instance):
self.name = name
self.instance = instance
self.dest_path = SCP_DEST_DIR + name
self.pid = None
self.perf_tool_path = perf_tool_path
# install the tool to the instance
# returns False if fail, True if success
def install(self):
if self.perf_tool_path:
local_path = os.path.join(self.perf_tool_path, self.name)
return self.instance.scp(self.name, local_path, self.dest_path)
# no install needed
return True
@abc.abstractmethod
def get_server_launch_cmd(self):
'''To be implemented by sub-classes.'''
return None
def start_server(self):
'''Launch the server side of this tool
:return: True if success, False if error
'''
# check if server is already started
if not self.pid:
self.pid = self.instance.ssh.pidof(self.name)
if not self.pid:
cmd_list = self.get_server_launch_cmd()
# Start the tool server
self.instance.buginf('Starting %s server...' % (self.name))
for launch_cmd in cmd_list:
launch_out = self.instance.exec_command(launch_cmd)
self.pid = self.instance.ssh.pidof(self.name)
else:
self.instance.buginf('%s server already started pid=%s' % (self.name, self.pid))
if self.pid:
return True
else:
self.instance.display('Cannot launch server %s: %s' % (self.name, launch_out))
return False
# Terminate pid if started
def dispose(self):
if self.pid:
# Terminate the tool server process
self.instance.buginf('Terminating %s', self.name)
self.instance.ssh.kill_proc(self.pid)
self.pid = None
def parse_error(self, msg):
return {'error': msg, 'tool': self.name}
def parse_results(self, protocol, throughput, lossrate=None, retrans=None,
rtt_ms=None, reverse_dir=False,
msg_size=None,
cpu_load=None):
res = {'throughput_kbps': throughput,
'protocol': protocol,
'tool': self.name}
if self.instance.config.vm_bandwidth:
res['bandwidth_limit_kbps'] = self.instance.config.vm_bandwidth
if lossrate is not None:
res['loss_rate'] = lossrate
if retrans:
res['retrans'] = retrans
if rtt_ms:
res['rtt_ms'] = rtt_ms
if reverse_dir:
res['direction'] = 'reverse'
if msg_size:
res['pkt_size'] = msg_size
if cpu_load:
res['cpu_load'] = cpu_load
return res
@abc.abstractmethod
def run_client_dir(self, target_ip,
mss,
reverse_dir=False,
bandwidth_kbps=0,
udp=False,
length=0,
no_cpu_timed=0):
# must be implemented by sub classes
return None
def find_udp_bdw(self, pkt_size, target_ip):
'''Find highest UDP bandwidth within max loss rate for given packet size
:return: a dictionary describing the optimal bandwidth (see parse_results())
'''
# we use a binary search to converge to the optimal throughput
# start with 5Gbps - mid-range between 1 and 10Gbps
# Convergence can be *very* tricky because UDP throughput behavior
# can vary dramatically between host runs and guest runs.
# The packet rate limitation is going to dictate the effective
# send rate, meaning that small packet sizes will yield the worst
# throughput.
# The measured throughput can be vastly smaller than the requested
# throughput even when the loss rate is zero when the sender cannot
# send fast enough to fill the network, in that case increasing the
# requested rate will not make it any better
# Examples:
# 1. too much difference between requested/measured bw - regardless of loss rate
# => retry with bw mid-way between the requested bw and the measured bw
# /tmp/nuttcp-7.3.2 -T2 -u -l128 -R5000000K -p5001 -P5002 -fparse 192.168.1.2
# megabytes=36.9785 real_seconds=2.00 rate_Mbps=154.8474 tx_cpu=23 rx_cpu=32
# drop=78149 pkt=381077 data_loss=20.50746
# /tmp/nuttcp-7.3.2 -T2 -u -l128 -R2500001K -p5001 -P5002 -fparse 192.168.1.2
# megabytes=47.8063 real_seconds=2.00 rate_Mbps=200.2801 tx_cpu=24 rx_cpu=34
# drop=0 pkt=391629 data_loss=0.00000
# 2. measured and requested bw are very close :
# if loss_rate is too low
# increase bw mid-way between requested and last max bw
# if loss rate is too high
# decrease bw mid-way between the measured bw and the last min bw
# else stop iteration (converged)
# /tmp/nuttcp-7.3.2 -T2 -u -l8192 -R859376K -p5001 -P5002 -fparse 192.168.1.2
# megabytes=204.8906 real_seconds=2.00 rate_Mbps=859.2992 tx_cpu=99 rx_cpu=10
# drop=0 pkt=26226 data_loss=0.00000
min_kbps = 1
max_kbps = 10000000
kbps = 5000000
min_loss_rate = self.instance.config.udp_loss_rate_range[0]
max_loss_rate = self.instance.config.udp_loss_rate_range[1]
# stop if the remaining range to cover is less than 5%
while (min_kbps * 100 / max_kbps) < 95:
res_list = self.run_client_dir(target_ip, 0, bandwidth_kbps=kbps,
udp=True, length=pkt_size,
no_cpu_timed=1)
# always pick the first element in the returned list of dict(s)
# should normally only have 1 element
res = res_list[0]
if 'error' in res:
return res
loss_rate = res['loss_rate']
measured_kbps = res['throughput_kbps']
self.instance.buginf('pkt-size=%d throughput=%d<%d/%d<%d Kbps loss-rate=%d' %
(pkt_size, min_kbps, measured_kbps, kbps, max_kbps, loss_rate))
# the measured rate must be at least 80% of the requested rate
if (measured_kbps * 100 / kbps) < 80:
# the measured bw is too far away from the requested bw
# take half the distance or 3x the measured bw whichever is lowest
kbps = min(measured_kbps + (kbps - measured_kbps) / 2,
measured_kbps * 3)
max_kbps = kbps
continue
# The measured bw is within striking distance from the requested bw
# increase bw if loss rate is too small
if loss_rate < min_loss_rate:
# undershot
if measured_kbps > min_kbps:
min_kbps = measured_kbps
else:
# to make forward progress we need to increase min_kbps
# and try a higher bw since the loss rate is too low
min_kbps = int((max_kbps + min_kbps) / 2)
kbps = int((max_kbps + min_kbps) / 2)
# print ' undershot, min=%d kbps=%d max=%d' % (min_kbps, kbps, max_kbps)
elif loss_rate > max_loss_rate:
# overshot
max_kbps = kbps
if measured_kbps < kbps:
kbps = measured_kbps
else:
kbps = int((max_kbps + min_kbps) / 2)
# print ' overshot, min=%d kbps=%d max=%d' % (min_kbps, kbps, max_kbps)
else:
# converged within loss rate bracket
break
return res
def get_proto_profile(self):
'''Return a tuple containing the list of protocols (tcp/udp) and
list of packet sizes (udp only)
'''
# start with TCP (udp=False) then UDP
proto_list = []
proto_pkt_sizes = []
if 'T' in self.instance.config.protocols:
proto_list.append(False)
proto_pkt_sizes.append(self.instance.config.tcp_pkt_sizes)
if 'U' in self.instance.config.protocols:
proto_list.append(True)
proto_pkt_sizes.append(self.instance.config.udp_pkt_sizes)
return (proto_list, proto_pkt_sizes)
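# Example (hypothetical config values): with protocols='TU',
# tcp_pkt_sizes=[65536] and udp_pkt_sizes=[128, 1024, 8192], this returns
# ([False, True], [[65536], [128, 1024, 8192]]),
# i.e. TCP first (udp=False) with its sizes, then UDP with its sizes.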
class PingTool(PerfTool):
'''
A class to run ping and get loss rate and round trip time
'''
def __init__(self, instance):
PerfTool.__init__(self, 'ping', None, instance)
def run_client(self, target_ip, ping_count=5):
'''Perform the ping operation
:return: a dict containing the results stats
Example of output:
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 55.855/66.074/103.915/13.407 ms
or
5 packets transmitted, 5 received, 0% packet loss, time 3998ms
rtt min/avg/max/mdev = 0.455/0.528/0.596/0.057 ms
'''
cmd = "ping -c " + str(ping_count) + " " + str(target_ip)
cmd_out = self.instance.exec_command(cmd)
if not cmd_out:
res = {'protocol': 'ICMP',
'tool': 'ping',
'error': 'failed'}
return res
match = re.search(r'(\d*) packets transmitted, (\d*) ',
cmd_out)
if match:
tx_packets = match.group(1)
rx_packets = match.group(2)
else:
tx_packets = 0
rx_packets = 0
match = re.search(r'min/avg/max/[a-z]* = ([\d\.]*)/([\d\.]*)/([\d\.]*)/([\d\.]*)',
cmd_out)
if match:
rtt_min = match.group(1)
rtt_avg = match.group(2)
rtt_max = match.group(3)
rtt_stddev = match.group(4)
else:
rtt_min = 0
rtt_max = 0
rtt_avg = 0
rtt_stddev = 0
res = {'protocol': 'ICMP',
'tool': 'ping',
'tx_packets': tx_packets,
'rx_packets': rx_packets,
'rtt_min_ms': rtt_min,
'rtt_max_ms': rtt_max,
'rtt_avg_ms': rtt_avg,
'rtt_stddev': rtt_stddev}
return res
def get_server_launch_cmd(self):
# not applicable
return None
def run_client_dir(self, target_ip,
mss,
reverse_dir=False,
bandwidth_kbps=0,
udp=False,
length=0,
no_cpu_timed=0):
# not applicable
return None

142
pns_mongo.py Executable file

@@ -0,0 +1,142 @@
#!/usr/bin/env python
# Copyright 2014 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import pymongo
def connect_to_mongod(mongod_ip, mongod_port):
'''
Create a connection to the mongo daemon.
'''
if mongod_ip is None:
mongod_ip = "localhost"
if mongod_port is None:
mongod_port = 27017
client = None
try:
client = pymongo.MongoClient(mongod_ip, mongod_port)
except pymongo.errors.ConnectionFailure:
print "ERROR: pymongo. Connection Failure (%s) (%d)" % \
(mongod_ip, mongod_port)
return None
return client
def get_mongod_collection(db_client, database_name, collection_name):
'''
Given db name and collection name, get the collection object.
'''
mongo_db = db_client[database_name]
if mongo_db is None:
print "Invalid database name"
return None
collection = mongo_db[collection_name]
if collection is None:
return None
return collection
def is_type_dict(var):
return isinstance(var, dict)
def add_new_document_to_collection(collection, document):
if collection is None:
print "collection cannot be none"
return None
if not is_type_dict(document):
print "Document type should be a dictionary"
return None
post_id = collection.insert(document)
return post_id
def search_documents_in_collection(collection, pattern):
if collection is None:
print "collection cannot be None"
return None
if pattern is None:
pattern = {}
if not is_type_dict(pattern):
print "pattern type should be a dictionary"
return None
try:
output = collection.find(pattern)
except TypeError:
print "A TypeError occured. Invalid pattern: ", pattern
return None
return output
def pns_add_test_result_to_mongod(mongod_ip,
mongod_port, pns_database,
pns_collection, document):
'''
Invoked from vmtp to add a new result to the mongod database.
'''
client = connect_to_mongod(mongod_ip, mongod_port)
if client is None:
print "ERROR: Failed to connect to mongod (%s) (%d)" % \
(mongod_ip, mongod_port)
return None
collection = get_mongod_collection(client, pns_database, pns_collection)
if collection is None:
print "ERROR: Failed to get collection DB: %s, %s" % \
(pns_database, pns_collection)
return None
post_id = add_new_document_to_collection(collection, document)
return post_id
def pns_search_results_from_mongod(mongod_ip, mongod_port,
pns_database, pns_collection,
pattern):
'''
Can be invoked from a helper script to query the mongod database
'''
client = connect_to_mongod(mongod_ip, mongod_port)
if client is None:
print "ERROR: Failed to connect to mongod (%s) (%d)" % \
(mongod_ip, mongod_port)
return
collection = get_mongod_collection(client, pns_database, pns_collection)
if collection is None:
print "ERROR: Failed to get collection DB: %s, %s" % \
(pns_database, pns_collection)
return
docs = search_documents_in_collection(collection, pattern)
return docs
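# Usage sketch (hypothetical server and document; vmtp normally supplies
# these values from its config and run results):
#   doc = {'date': '2015-01-01 00:00:00', 'flows': []}
#   post_id = pns_add_test_result_to_mongod('127.0.0.1', 27017,
#                                           'pnsdb', 'testdata', doc)
#   docs = pns_search_results_from_mongod('127.0.0.1', 27017,
#                                         'pnsdb', 'testdata', {})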

328
pnsdb_summary.py Executable file

@@ -0,0 +1,328 @@
#!/usr/bin/env python
# Copyright 2014 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import argparse
import re
import sys
import pns_mongo
import tabulate
###########################################
# Global list of all result functions
# that are displayed as a menu/list.
###########################################
pnsdb_results_list = [
("Summary of all results", "show_summary_all"),
("Show TCP results for vlan encap", "show_tcp_summary_encap_vlan"),
("Show UDP results for vlan encap", "show_udp_summary_encap_vlan"),
]
network_type = [
(0, "L2 Network"),
(1, "L3 Network"),
(100, "Unknown"),
]
vm_loc = [
(0, "Intra-node"),
(1, "Inter-node"),
]
flow_re = re.compile(r".*(same|different) network.*(fixed|floating).*"
"IP.*(inter|intra).*",
re.IGNORECASE)
def get_flow_type(flow_desc):
vm_location = None
nw_type = None
fixed_ip = None
mobj = flow_re.match(flow_desc)
if mobj:
if mobj.group(1) == "same":
nw_type = network_type[0][0]
elif mobj.group(1) == "different":
nw_type = network_type[1][0]
else:
nw_type = network_type[2][0]
if mobj.group(2) == "fixed":
fixed_ip = True
else:
fixed_ip = False
if mobj.group(3) == "inter":
vm_location = vm_loc[1][0]
else:
vm_location = vm_loc[0][0]
return(vm_location, nw_type, fixed_ip)
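# Example (hypothetical flow description in the style matched by flow_re):
#   get_flow_type('VM to VM, same network, fixed IP, intra-node')
# matches groups ('same', 'fixed', 'intra') and returns
# (0, 0, True), i.e. (Intra-node, L2 Network, fixed IP).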
def get_tcp_flow_data(data):
record_list = []
for record in data:
for flow in record['flows']:
results = flow['results']
get_flow_type(flow['desc'])
for result in results:
show_record = {}
if result['protocol'] == "TCP" or result['protocol'] == "tcp":
show_record['throughput_kbps'] = result['throughput_kbps']
show_record['rtt_ms'] = result['rtt_ms']
show_record['pkt_size'] = result['pkt_size']
show_record['openstack_version'] = record['openstack_version']
show_record['date'] = record['date']
show_record['distro'] = record['distro']
# show_record['desc'] = flow['desc']
record_list.append(show_record)
return record_list
def get_udp_flow_data(data):
record_list = []
for record in data:
for flow in record['flows']:
results = flow['results']
get_flow_type(flow['desc'])
for result in results:
show_record = {}
if result['protocol'] == "UDP" or result['protocol'] == "udp":
show_record['throughput_kbps'] = result['throughput_kbps']
show_record['loss_rate'] = result['loss_rate']
show_record['openstack_version'] = record['openstack_version']
show_record['date'] = record['date']
show_record['distro'] = record['distro']
# show_record['desc'] = flow['desc']
record_list.append(show_record)
return record_list
def show_pnsdb_summary(db_server, db_port, db_name, db_collection):
'''
Show a summary of results.
'''
pattern = {}
data = pns_mongo.pns_search_results_from_mongod(db_server,
db_port,
db_name,
db_collection,
pattern)
record_list = get_tcp_flow_data(data)
print tabulate.tabulate(record_list, headers="keys", tablefmt="grid")
print data.count()
data = pns_mongo.pns_search_results_from_mongod(db_server,
db_port,
db_name,
db_collection,
pattern)
record_list = get_udp_flow_data(data)
print "UDP:"
print tabulate.tabulate(record_list, headers="keys", tablefmt="grid")
def get_results_info(results, cols, protocol=None):
result_list = []
for result in results:
show_result = {}
if protocol is not None:
if result['protocol'] != protocol:
continue
for col in cols:
if col in result.keys():
show_result[col] = result[col]
result_list.append(show_result)
return result_list
def get_flow_info(flow, cols):
flow_list = []
show_flow = {}
for col in cols:
show_flow[col] = flow[col]
(vmloc, nw_type, fixed_ip) = get_flow_type(flow['desc'])
# nw_type values 0 and 1 double as list indices; 100 (Unknown) is at index 2
show_flow['nw_type'] = network_type[2 if nw_type == 100 else nw_type][1]
show_flow['vm_loc'] = vm_loc[vmloc][1]
if fixed_ip:
show_flow['fixed_float'] = "Fixed IP"
else:
show_flow['fixed_float'] = "Floating IP"
flow_list.append(show_flow)
return flow_list
def get_record_info(record, cols):
record_list = []
show_record = {}
for col in cols:
show_record[col] = record[col]
record_list.append(show_record)
return record_list
def print_record_header(record):
print "#" * 60
print "RUN: %s" % (record['date'])
cols = ['date', 'distro', 'openstack_version', 'encapsulation']
record_list = get_record_info(record, cols)
print tabulate.tabulate(record_list)
def print_flow_header(flow):
cols = ['desc']
flow_list = get_flow_info(flow, cols)
print tabulate.tabulate(flow_list, tablefmt="simple")
def show_tcp_summary_encap_vlan(db_server, db_port, db_name, db_collection):
pattern = {"encapsulation": "vlan"}
data = pns_mongo.pns_search_results_from_mongod(db_server,
db_port,
db_name,
db_collection,
pattern)
for record in data:
print_record_header(record)
for flow in record['flows']:
print_flow_header(flow)
cols = ['throughput_kbps', 'protocol', 'tool', 'rtt_ms']
result_list = get_results_info(flow['results'], cols,
protocol="TCP")
print tabulate.tabulate(result_list,
headers="keys", tablefmt="grid")
print "\n"
def show_udp_summary_encap_vlan(db_server, db_port, db_name, db_collection):
pattern = {"encapsulation": "vlan"}
data = pns_mongo.pns_search_results_from_mongod(db_server,
db_port,
db_name,
db_collection,
pattern)
for record in data:
print_record_header(record)
for flow in record['flows']:
print_flow_header(flow)
cols = ['throughput_kbps', 'protocol', 'loss_rate', 'pkt_size']
result_list = get_results_info(flow['results'], cols,
protocol="UDP")
print tabulate.tabulate(result_list,
headers="keys", tablefmt="grid")
def show_summary_all(db_server, db_port, db_name, db_collection):
pattern = {}
print "-" * 60
print "Summary Data: "
print "-" * 60
data = pns_mongo.pns_search_results_from_mongod(db_server,
db_port,
db_name,
db_collection,
pattern)
for record in data:
print_record_header(record)
for flow in record['flows']:
print_flow_header(flow)
# Display the results for each flow.
cols = ['throughput_kbps', 'protocol', 'tool',
'rtt_ms', 'loss_rate', 'pkt_size',
'rtt_avg_ms']
result_list = get_results_info(flow['results'], cols)
print tabulate.tabulate(result_list,
headers="keys", tablefmt="grid")
print "\n"
def main():
####################################################################
# parse arguments.
# --server-ip [required]
# --server-port [optional] [default: 27017]
# --official [optional]
####################################################################
parser = argparse.ArgumentParser(description="VMTP Results formatter")
parser.add_argument('-s', "--server-ip", dest="server_ip",
action="store",
help="MongoDB Server IP address")
parser.add_argument('-p', "--server-port", dest="server_port",
action="store",
help="MongoDB Server port (default 27017)")
parser.add_argument("-o", "--official", default=False,
action="store_true",
help="Access offcial results collection")
(opts, _) = parser.parse_known_args()
if not opts.server_ip:
print "Provide the pns db server ip address"
sys.exit()
db_server = opts.server_ip
if not opts.server_port:
db_port = 27017
else:
db_port = opts.server_port
db_name = "pnsdb"
if opts.official:
print "Use db collection officialdata"
db_collection = "officialdata"
else:
db_collection = "testdata"
print "-" * 40
print "Reports Menu:"
print "-" * 40
count = 0
for option in pnsdb_results_list:
print "%d: %s" % (count, option[0])
count += 1
print "\n"
try:
user_opt = int(raw_input("Choose a report [number]: "))
except ValueError:
print "Invalid option"
sys.exit()
globals()[pnsdb_results_list[user_opt][1]](db_server,
db_port, db_name, db_collection)
if __name__ == '__main__':
main()

17
pylintrc Normal file
View File

@ -0,0 +1,17 @@
[BASIC]
# Allow constant names to be lower case
const-rgx=[a-zA-Z_][a-zA-Z0-9_]{2,30}$
module-rgx=[a-zA-Z_][a-zA-Z0-9_]{2,30}$
max-line-length=100
max-args=10
max-branches=20
max-locals=20
good-names=az,ip,_,rc
max-statements=100
[MESSAGE CONTROL]
disable=missing-docstring,too-many-public-methods,too-many-instance-attributes,star-args,pointless-string-statement,no-self-use,too-many-locals,superfluous-parens,too-few-public-methods,unused-argument
[SIMILARITIES]
min-similarity-lines=10
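# Illustrative usage (assuming this file sits at the repo root):
#   pylint --rcfile=pylintrc vmtp.py sshutils.py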

5
requirements-dev.txt Normal file
View File

@ -0,0 +1,5 @@
-r requirements.txt
git-review>=1.24
pylint>=1.3
pep8>=1.5.7

20
requirements.txt Normal file
View File

@ -0,0 +1,20 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=0.6,!=0.7,<1.0
Babel>=1.3
configure>=0.5
ecdsa>=0.11
lxml>=3.4.0
oslo.utils>=1.2.0
paramiko>=1.14.0
pycrypto>=2.6.1
pymongo>=2.7.2
python-neutronclient<3,>=2.3.6
python-novaclient>=2.18.1
python-openstackclient>=0.4.1
python-keystoneclient>=1.0.0
scp>=0.8.0
tabulate>=0.7.3
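# Illustrative install command (run from the repo root):
#   pip install -r requirements.txt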

2
run_tests.sh Normal file
View File

@ -0,0 +1,2 @@
#! /bin/bash
vmtp.py -h

47
setup.cfg Normal file
View File

@ -0,0 +1,47 @@
[metadata]
name = vmtp
summary = A data path performance tool for OpenStack clouds.
description-file =
README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 2.6
Programming Language :: Python :: 3
Programming Language :: Python :: 3.3
Programming Language :: Python :: 3.4
[files]
packages =
vmtp
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
[upload_sphinx]
upload-dir = doc/build/html
[compile_catalog]
directory = vmtp/locale
domain = vmtp
[update_catalog]
domain = vmtp
output_dir = vmtp/locale
input_file = vmtp/locale/vmtp.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = vmtp/locale/vmtp.pot

22
setup.py Executable file
View File

@ -0,0 +1,22 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
setuptools.setup(
setup_requires=['pbr'],
pbr=True)

27
ssh/id_rsa Normal file
View File

@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAu1wjIM/GgPbbLPoyKfN+I1uSeqrF4PYcbsTcHaQ6mEF/Ufqe
6uZVJFR1mT1ECfxCckUMM6aaf5ESNAfxEjE9Hrs/Yd3Qw5hwSlG3HJ4uZg79m1yg
ifXfnkp4rGNI6sqZxgyMbeQW3KoDaQ6zOe7e/KIkxX4XzR8I4d5lRx4ofIb806AB
QJkDrq48dHLGZnOBPvpNImpg5u6EWHGa4HI4Dl2pdyEQXTXOErupalOC1Cr8oxwu
fimSxftnl9Nh94wQtTQADnCE2GBaMMxS/ClHtJLDfmtnVC51Y4F7Ux9F3MSTSRBP
gNxcd9OikMKSj6RNq/PHw5+9R0h5v2lJXvalCQIDAQABAoIBAArCu/HCfTAi/WuT
4xWtumzlcYBCFqNY/0ENZWb+a68a8+kNb9sl53Xys95dOm8oYdiWRqEgzHbPKjB6
1EmrMkt1japdRwQ02R4rm0y1eQy7h61IoJ/L01AQDuY3vZReln5dciNNmlKKITAD
fB+zrHLuDRaaq1tIkQYH8+ElxkWAkd/vRQC4FP1OMIGnX4TdQ8lcG2DxwMs5jqJ6
ufTeR6QMDEymNYQwcdFhe5wNi57IEbN9B+N95yaktWsYV34HuYV2ndZtrhMLFhcq
Psw3vgrXBrreVPZ/iX1zeWgrjJb1AVOCtsOZ+O4YfZIIBWnhjj9sJnDCpMWmioH5
a0UmF0ECgYEA+NyIS5MmbrVJKVqWUJubSbaZXQEW3Jv4njRFAyG7NVapSbllF5t2
lq5usUI+l1XaZ3v6IpYPG+K+U1Ggo3+E6RFEDwVrZF0NYLOPXBydhkFFB4nHpTSX
uBo65/SiMDSassrqs/PFCDdsiUQ87sMFp+gouDePcBDC1OyHRDxR220CgYEAwLv6
zvqi5AvzO+tZQHRHcQdvCNC436KsUZlV6jQ5j3tUlqXlLRl8bWfih7Hu2uBGNjy2
Fao517Nd/kBdjVaechn/fvmLwgflQso0q1j63u2nZ32uYTd+zLnW1yJM4UCs/Hqb
hebRYDeZuRfobp62lEl6rdGij5LLRzQClOArso0CgYAaHClApKO3odWXPSXgNzNH
vJzCoUagxsyC7MEA3x0hL4J7dbQhkfITRSHf/y9J+Xv8t4k677uOFXAalcng3ZQ4
T9NwMAVgdlLc/nngFDCC0X5ImDAWKTpx2m6rv4L0w9AnShrt3nmhrw74J+yssFF7
mGQNT+cAvwFyDY7zndCI0QKBgEkZw0on5ApsweezHxoEQGiNcj68s7IWyBb2+pAn
GMHj/DRbXa4aYYg5g8EF6ttXfynpIwLamq/GV1ss3I7UEKqkU7S8P5brWbhYa1um
FxjguMLW94HmA5Dw15ynZNN2rWXhtwU1g6pjzElY2Q7D4eoiaIZu4aJlAfbSsjv3
PnutAoGBAMBRX8BbFODtQr68c6LWWda5zQ+kNgeCv+2ejG6rsEQ+Lxwi86Oc6udG
kTP4xuZo80MEW/t+kibFgU6gm1WTVltpbjo0XTaHE1OV4JeNC8edYFTi1DVO5r1M
ch+pkN20FQmZ+cLLn6nOeTJ6/9KXWKAZMPZ4SH4BnmF7iEa7yc8f
-----END RSA PRIVATE KEY-----

1
ssh/id_rsa.pub Normal file
View File

@ -0,0 +1 @@
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7XCMgz8aA9tss+jIp834jW5J6qsXg9hxuxNwdpDqYQX9R+p7q5lUkVHWZPUQJ/EJyRQwzppp/kRI0B/ESMT0euz9h3dDDmHBKUbccni5mDv2bXKCJ9d+eSnisY0jqypnGDIxt5BbcqgNpDrM57t78oiTFfhfNHwjh3mVHHih8hvzToAFAmQOurjx0csZmc4E++k0iamDm7oRYcZrgcjgOXal3IRBdNc4Su6lqU4LUKvyjHC5+KZLF+2eX02H3jBC1NAAOcITYYFowzFL8KUe0ksN+a2dULnVjgXtTH0XcxJNJEE+A3Fx306KQwpKPpE2r88fDn71HSHm/aUle9qUJ openstack-pns

475
sshutils.py Normal file
View File

@ -0,0 +1,475 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""High level ssh library.
Usage examples:
Execute command and get output:
ssh = sshclient.SSH('root', 'example.com', port=33)
status, stdout, stderr = ssh.execute('ps ax')
if status:
raise Exception('Command failed with non-zero status.')
print stdout.splitlines()
Execute command with huge output:
class PseudoFile(object):
def write(self, chunk):
if 'error' in chunk:
email_admin(chunk)
ssh = sshclient.SSH('root', 'example.com')
ssh.run('tail -f /var/log/syslog', stdout=PseudoFile(), timeout=False)
Execute local script on remote side:
ssh = sshclient.SSH('user', 'example.com')
status, out, err = ssh.execute('/bin/sh -s arg1 arg2',
stdin=open('~/myscript.sh', 'r'))
Upload file:
ssh = sshclient.SSH('user', 'example.com')
ssh.run('cat > ~/upload/file.gz', stdin=open('/store/file.gz', 'rb'))
Eventlet:
eventlet.monkey_patch(select=True, time=True)
or
eventlet.monkey_patch()
or
sshclient = eventlet.import_patched("openstack.common.sshclient")
"""
import re
import select
import socket
import StringIO
import time
import paramiko
import scp
# from rally.openstack.common.gettextutils import _
class SSHError(Exception):
pass
class SSHTimeout(SSHError):
pass
class SSH(object):
"""Represent ssh connection."""
def __init__(self, user, host, port=22, pkey=None,
key_filename=None, password=None,
connect_timeout=60,
connect_retry_count=30,
connect_retry_wait_sec=2):
"""Initialize SSH client.
:param user: ssh username
:param host: hostname or ip address of remote ssh server
:param port: remote ssh port
:param pkey: RSA or DSS private key string or file object
:param key_filename: private key filename
:param password: password
:param connect_timeout: timeout when connecting ssh
:param connect_retry_count: how many times to retry connecting
:param connect_retry_wait_sec: seconds to wait between retries
"""
self.user = user
self.host = host
self.port = port
self.pkey = self._get_pkey(pkey) if pkey else None
self.password = password
self.key_filename = key_filename
self._client = False
self.connect_timeout = connect_timeout
self.connect_retry_count = connect_retry_count
self.connect_retry_wait_sec = connect_retry_wait_sec
self.distro_id = None
self.distro_id_like = None
self.distro_version = None
self.__get_distro()
def _get_pkey(self, key):
if isinstance(key, basestring):
key = StringIO.StringIO(key)
errors = []
for key_class in (paramiko.rsakey.RSAKey, paramiko.dsskey.DSSKey):
try:
return key_class.from_private_key(key)
except paramiko.SSHException as exc:
errors.append(exc)
raise SSHError('Invalid pkey: %s' % (errors))
def _get_client(self):
if self._client:
return self._client
self._client = paramiko.SSHClient()
self._client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
for _ in range(self.connect_retry_count):
try:
self._client.connect(self.host, username=self.user,
port=self.port, pkey=self.pkey,
key_filename=self.key_filename,
password=self.password,
timeout=self.connect_timeout)
return self._client
except (paramiko.AuthenticationException,
paramiko.BadHostKeyException,
paramiko.SSHException,
socket.error):
time.sleep(self.connect_retry_wait_sec)
self._client = None
msg = '[%s] SSH Connection failed after %s attempts' % (self.host,
self.connect_retry_count)
raise SSHError(msg)
def close(self):
self._client.close()
self._client = False
def run(self, cmd, stdin=None, stdout=None, stderr=None,
raise_on_error=True, timeout=3600):
"""Execute specified command on the server.
:param cmd: Command to be executed.
:param stdin: Open file or string to pass to stdin.
:param stdout: Open file to connect to stdout.
:param stderr: Open file to connect to stderr.
:param raise_on_error: If False then the exit code will be returned. If True
then an exception will be raised on a non-zero exit code.
:param timeout: Timeout in seconds for command execution.
Default 1 hour. No timeout if set to 0.
"""
client = self._get_client()
if isinstance(stdin, basestring):
stdin = StringIO.StringIO(stdin)
return self._run(client, cmd, stdin=stdin, stdout=stdout,
stderr=stderr, raise_on_error=raise_on_error,
timeout=timeout)
def _run(self, client, cmd, stdin=None, stdout=None, stderr=None,
raise_on_error=True, timeout=3600):
transport = client.get_transport()
session = transport.open_session()
session.exec_command(cmd)
start_time = time.time()
data_to_send = ''
stderr_data = None
# If we have data to be sent to stdin then `select' should also
# check for stdin availability.
if stdin and not stdin.closed:
writes = [session]
else:
writes = []
while True:
# Block until data can be read/write.
select.select([session], writes, [session], 1)
if session.recv_ready():
data = session.recv(4096)
if stdout is not None:
stdout.write(data)
continue
if session.recv_stderr_ready():
stderr_data = session.recv_stderr(4096)
if stderr is not None:
stderr.write(stderr_data)
continue
if session.send_ready():
if stdin is not None and not stdin.closed:
if not data_to_send:
data_to_send = stdin.read(4096)
if not data_to_send:
stdin.close()
session.shutdown_write()
writes = []
continue
sent_bytes = session.send(data_to_send)
data_to_send = data_to_send[sent_bytes:]
if session.exit_status_ready():
break
if timeout and (time.time() - timeout) > start_time:
args = {'cmd': cmd, 'host': self.host}
raise SSHTimeout(('Timeout executing command '
'"%(cmd)s" on host %(host)s') % args)
# if e:
# raise SSHError('Socket error.')
exit_status = session.recv_exit_status()
if 0 != exit_status and raise_on_error:
fmt = ('Command "%(cmd)s" failed with exit_status %(status)d.')
details = fmt % {'cmd': cmd, 'status': exit_status}
if stderr_data:
details += (' Last stderr data: "%s".') % stderr_data
raise SSHError(details)
return exit_status
def execute(self, cmd, stdin=None, timeout=3600):
"""Execute the specified command on the server.
:param cmd: Command to be executed.
:param stdin: Open file to be sent on process stdin.
:param timeout: Timeout for execution of the command.
Return tuple (exit_status, stdout, stderr)
"""
stdout = StringIO.StringIO()
stderr = StringIO.StringIO()
exit_status = self.run(cmd, stderr=stderr,
stdout=stdout, stdin=stdin,
timeout=timeout, raise_on_error=False)
stdout.seek(0)
stderr.seek(0)
return (exit_status, stdout.read(), stderr.read())
def wait(self, timeout=120, interval=1):
"""Wait for the host will be available via ssh."""
start_time = time.time()
while True:
try:
return self.execute('uname')
except (socket.error, SSHError):
time.sleep(interval)
if time.time() > (start_time + timeout):
raise SSHTimeout(('Timeout waiting for "%s"') % self.host)
def __extract_property(self, name, input_str):
expr = name + r'="?([\w\.]*)"?'
match = re.search(expr, input_str)
if match:
return match.group(1)
return 'Unknown'
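# Illustrative behaviour of __extract_property (input shapes taken from the
# docstring below): given 'ID=ubuntu' it returns 'ubuntu', given
# 'VERSION_ID="14.04"' it returns '14.04', and 'Unknown' when absent.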
# Get the linux distro
def __get_distro(self):
'''cat /etc/*-release | grep ID
Ubuntu:
DISTRIB_ID=Ubuntu
ID=ubuntu
ID_LIKE=debian
VERSION_ID="14.04"
RHEL:
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.0"
'''
distro_cmd = "grep ID /etc/*-release"
(status, distro_out, _) = self.execute(distro_cmd)
if status:
distro_out = ''
self.distro_id = self.__extract_property('ID', distro_out)
self.distro_id_like = self.__extract_property('ID_LIKE', distro_out)
self.distro_version = self.__extract_property('VERSION_ID', distro_out)
def pidof(self, proc_name):
'''
Return a list containing the pids of all processes of a given name;
the list is empty if there is no such process.
'''
# the path update is necessary for RHEL
cmd = "PATH=$PATH:/usr/sbin pidof " + proc_name
(status, cmd_output, _) = self.execute(cmd)
if status:
return []
cmd_output = cmd_output.strip()
result = cmd_output.split()
return result
# kill pids in the given list of pids
def kill_proc(self, pid_list):
cmd = "kill -9 " + ' '.join(pid_list)
self.execute(cmd)
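# Illustrative usage (assumes an iperf process is running on the host):
#   pids = ssh.pidof('iperf')
#   if pids:
#       ssh.kill_proc(pids)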
# check stats for a given path
def stat(self, path):
(status, cmd_output, _) = self.execute('stat ' + path)
if status:
return None
return cmd_output
def ping_check(self, target_ip, ping_count=2, pass_threshold=80):
'''helper function to ping from one host to an IP address,
for a given count and pass_threshold;
Steps:
ssh to the host and then ping to the target IP
then match the output and verify that the loss% is
less than the pass_threshold%
Return 1 if the criteria pass
Return 0 if they fail
'''
cmd = "ping -c " + str(ping_count) + " " + str(target_ip)
(_, cmd_output, _) = self.execute(cmd)
match = re.search(r'(\d*)% packet loss', cmd_output)
if match is None:
print 'Ping to %s failed: %s' % (target_ip, cmd_output)
return 0
pkt_loss = match.group(1)
if int(pkt_loss) < int(pass_threshold):
return 1
else:
print 'Ping to %s failed: %s' % (target_ip, cmd_output)
return 0
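# Illustrative usage (hypothetical gateway address): passes as long as the
# measured packet loss over the 2 pings stays below the 80% threshold:
#   if ssh.ping_check('192.168.1.1', ping_count=2, pass_threshold=80):
#       print 'ping OK'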
def get_file_from_host(self, from_path, to_path):
'''
A wrapper api on top of the paramiko scp module, to scp
a remote file from the host to a local path.
'''
sshcon = self._get_client()
scpcon = scp.SCPClient(sshcon.get_transport())
try:
scpcon.get(from_path, to_path)
except scp.SCPException as exp:
print ("Send failed: [%s]", exp)
return 0
return 1
def read_remote_file(self, from_path):
'''
Read a remote file and save it to a buffer.
'''
cmd = "cat " + from_path
(status, cmd_output, _) = self.execute(cmd)
if status:
return None
return cmd_output
def get_host_os_version(self):
'''
Identify the host distribution/release.
'''
os_release_file = "/etc/os-release"
sys_release_file = "/etc/system-release"
name = ""
version = ""
if self.stat(os_release_file):
data = self.read_remote_file(os_release_file)
if data is None:
print "ERROR:Failed to read file %s" % os_release_file
return None
for line in data.splitlines():
mobj = re.match(r'PRETTY_NAME=(.*)', line)
if mobj:
name = mobj.group(1).strip("\"")
mobj = re.match(r'VERSION.*=(.*)', line)
if mobj:
version = mobj.group(1).strip("\"")
os_name = name + " " + version
return os_name
if self.stat(sys_release_file):
data = self.read_remote_file(sys_release_file)
if data is None:
print "ERROR:Failed to read file %s" % sys_release_file
return None
for line in data.splitlines():
mobj = re.match(r'Red Hat.*', line)
if mobj:
return mobj.group(0)
return None
def check_rpm_package_installed(self, rpm_pkg):
'''
Given a host and a package name, check if it is installed on the
system.
'''
check_pkg_cmd = "rpm -qa | grep " + rpm_pkg
(status, cmd_output, _) = self.execute(check_pkg_cmd)
if status:
return None
pkg_pattern = ".*" + rpm_pkg + ".*"
rpm_pattern = re.compile(pkg_pattern, re.IGNORECASE)
for line in cmd_output.splitlines():
mobj = rpm_pattern.match(line)
if mobj:
return mobj.group(0)
print "%s pkg installed " % rpm_pkg
return None
def check_openstack_version(self):
'''
Identify the openstack version running on the controller.
'''
version_file = "/tmp/version.txt"
nova_cmd = "nova --version >> " + version_file
(status, _, err_output) = self.execute(nova_cmd)
if status:
return None
if err_output.strip() == "2.17.0":
return "icehouse"
else:
return "juno"
##################################################
# Only invoke the module directly for test purposes. Should be
# invoked from pns script.
##################################################
def main():
# ssh = SSH('localadmin', '172.29.87.29', key_filename='./ssh/id_rsa')
ssh = SSH('localadmin', '172.22.191.173', key_filename='./ssh/id_rsa')
print 'ID=' + ssh.distro_id
print 'ID_LIKE=' + ssh.distro_id_like
print 'VERSION_ID=' + ssh.distro_version
ssh.wait()
print ssh.pidof('bash')
print ssh.stat('/tmp')
if __name__ == "__main__":
main()

15
test-requirements.txt Normal file
View File

@ -0,0 +1,15 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking>=0.9.2,<0.10
coverage>=3.6
discover
python-subunit>=0.0.18
sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
oslosphinx>=2.2.0 # Apache-2.0
oslotest>=1.2.0 # Apache-2.0
testrepository>=0.0.18
testscenarios>=0.4
testtools>=0.9.36,!=1.2.0

BIN
tools/iperf Executable file

Binary file not shown.

BIN
tools/nuttcp-7.3.2 Executable file

Binary file not shown.

42
tox.ini Normal file
View File

@ -0,0 +1,42 @@
[tox]
minversion = 1.6
envlist = py33,py34,py26,py27,pypy,pep8
skipsdist = True
[testenv]
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'
[testenv:pep8]
commands = flake8
[testenv:venv]
commands = {posargs}
[testenv:cover]
commands = python setup.py testr --coverage --testr-args='{posargs}'
[testenv:docs]
commands = python setup.py build_sphinx
[flake8]
# H803 skipped on purpose per list discussion.
# E123, E125 skipped as they are invalid PEP-8.
max-line-length = 100
show-source = True
#E302: expected 2 blank lines
#E303: too many blank lines (2)
#H233: Python 3.x incompatible use of print operator
#H236: Python 3.x incompatible __metaclass__, use six.add_metaclass()
#H302: import only modules.
#H404: multi line docstring should start without a leading new line
#H405: multi line docstring summary not separated with an empty line
#H904: Wrap long lines in parentheses instead of a backslash
ignore = E123,E125,H803,E302,E303,H233,H236,H302,H404,H405,H904
builtins = _
exclude=.venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build
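# Illustrative usage: 'tox -e pep8' runs the flake8 checks configured above,
# while 'tox -e py27' runs the unit tests under Python 2.7.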

768
vmtp.py Executable file
View File

@ -0,0 +1,768 @@
# Copyright 2014 Cisco Systems, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import argparse
import ast
import datetime
import json
import os
import pprint
import re
import socket
import stat
import sys
import traceback
import compute
import credentials
import iperf_tool
import network
import nuttcp_tool
import pns_mongo
import sshutils
import configure
from neutronclient.v2_0 import client as neutronclient
from novaclient.client import Client
__version__ = '2.0.0'
from perf_instance import PerfInstance as PerfInstance
# Global external host info
ext_host_list = []
# Check IPv4 address syntax - not completely foolproof but will catch
# some invalid formats
def is_ipv4(address):
try:
socket.inet_aton(address)
except socket.error:
return False
return True
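# Illustrative behaviour: is_ipv4('10.0.0.1') returns True while
# is_ipv4('example.com') returns False; note that inet_aton also accepts
# shorthand forms such as '10.0.1', hence the caveat above.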
def get_absolute_path_for_file(file_name):
'''
Return the absolute path for any file
passed as a relative path.
'''
if os.path.isabs(__file__):
abs_file_path = os.path.join(__file__.split("vmtp.py")[0],
file_name)
else:
abs_file = os.path.abspath(__file__)
abs_file_path = os.path.join(abs_file.split("vmtp.py")[0],
file_name)
return abs_file_path
def normalize_paths(cfg):
'''
Normalize the various paths to config files, tools, ssh priv and pub key
files.
'''
cfg.public_key_file = get_absolute_path_for_file(cfg.public_key_file)
cfg.private_key_file = get_absolute_path_for_file(cfg.private_key_file)
cfg.perf_tool_path = get_absolute_path_for_file(cfg.perf_tool_path)
class FlowPrinter(object):
def __init__(self):
self.flow_num = 0
def print_desc(self, desc):
self.flow_num += 1
print "=" * 60
print('Flow %d: %s' % (self.flow_num, desc))
class ResultsCollector(object):
def __init__(self):
self.results = {'flows': []}
self.results['date'] = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
self.results['args'] = ' '.join(sys.argv)
self.results['version'] = __version__
self.ppr = pprint.PrettyPrinter(indent=4, width=100)
def add_property(self, name, value):
self.results[name] = value
def add_flow_result(self, flow_res):
self.results['flows'].append(flow_res)
self.ppr.pprint(flow_res)
def display(self):
self.ppr.pprint(self.results)
def pprint(self, res):
self.ppr.pprint(res)
def save(self, filename):
'''Save results in json format file.'''
print('Saving results in json file: ' + filename)
with open(filename, 'w') as jfp:
json.dump(self.results, jfp, indent=4, sort_keys=True)
def save_to_db(self, cfg):
'''Save results to the MongoDB database.'''
print "Saving results to MongoDB database."
sshcon = sshutils.SSH(cfg.access_username,
cfg.access_host,
password=cfg.access_password)
if sshcon is not None:
self.results['distro'] = sshcon.get_host_os_version()
self.results['openstack_version'] = sshcon.check_openstack_version()
post_id = pns_mongo.\
pns_add_test_result_to_mongod(cfg.pns_mongod_ip,
cfg.pns_mongod_port,
cfg.pns_db,
cfg.pns_collection,
self.results)
if post_id is None:
print "ERROR: Failed to add result to DB"
class VmtpException(Exception):
pass
class VmtpTest(object):
def __init__(self):
'''
1. Authenticate nova and neutron with keystone
2. Create new client objects for neutron and nova
3. Find external network
4. Find or create router for external network
5. Find or create internal mgmt and data networks
6. Add internal mgmt network to router
7. Import public key for ssh
8. Create 2 VM instances on internal networks
9. Create floating ips for VMs
10. Associate floating ip with VMs
'''
self.server = None
self.client = None
self.net = None
self.comp = None
self.ping_status = None
self.client_az_list = None
self.sec_group = None
self.image_instance = None
self.flavor_type = None
# Create an instance on a particular availability zone
def create_instance(self, inst, az, int_net):
nics = [{'net-id': int_net['id']}]
self.assert_true(inst.create(self.image_instance,
self.flavor_type,
config.public_key_name,
nics,
az,
int_net['name'],
self.sec_group))
def assert_true(self, cond):
if not cond:
raise VmtpException('Assert failure')
def setup(self):
# If we need to reuse existing vms just return without setup
if not config.reuse_existing_vm:
creds = cred.get_credentials()
creds_nova = cred.get_nova_credentials_v2()
# Create the nova and neutron instances
nova_client = Client(**creds_nova)
neutron = neutronclient.Client(**creds)
self.comp = compute.Compute(nova_client, config)
# Add the script public key to openstack
self.comp.add_public_key(config.public_key_name,
config.public_key_file)
self.image_instance = self.comp.find_image(config.image_name)
if self.image_instance is None:
"""
# Try to upload the image
print '%s: image not found, will try to upload it' % (config.image_name)
self.comp.copy_and_upload_image(config.image_name, config.server_ip_for_image,
config.image_path_in_server)
time.sleep(10)
self.image_instance = self.comp.find_image(config.image_name)
"""
# Exit the program
print '%s: image not found.' % (config.image_name)
sys.exit(1)
self.assert_true(self.image_instance)
print 'Found image: %s' % (config.image_name)
self.flavor_type = self.comp.find_flavor(config.flavor_type)
self.net = network.Network(neutron, config)
# Create a new security group for the test
self.sec_group = self.comp.security_group_create()
if not self.sec_group:
raise VmtpException("Security group creation failed")
if config.reuse_existing_vm:
self.server.internal_ip = config.vm_server_internal_ip
self.client.internal_ip = config.vm_client_internal_ip
if config.vm_server_external_ip:
self.server.ssh_ip = config.vm_server_external_ip
else:
self.server.ssh_ip = config.vm_server_internal_ip
if config.vm_client_external_ip:
self.client.ssh_ip = config.vm_client_external_ip
else:
self.client.ssh_ip = config.vm_client_internal_ip
return
# this is the standard way of running the test
# NICs to be used for the VM
if config.reuse_network_name:
# VM needs to connect to existing management and new data network
# Reset the management network name
config.internal_network_name[0] = config.reuse_network_name
else:
# Make sure we have an external network and an external router
self.assert_true(self.net.ext_net)
self.assert_true(self.net.ext_router)
self.assert_true(self.net.vm_int_net)
# Get hosts for the availability zone to use
avail_list = self.comp.list_hypervisor(config.availability_zone)
# compute the list of client vm placements to run
if avail_list:
server_az = config.availability_zone + ":" + avail_list[0]
if len(avail_list) > 1:
# can do intra + inter
if config.inter_node_only:
# inter-node only
self.client_az_list = [config.availability_zone +
":" + avail_list[1]]
else:
self.client_az_list = [server_az, config.availability_zone +
":" + avail_list[1]]
else:
# can only do intra
self.client_az_list = [server_az]
else:
# cannot get the list of hosts
# can do intra or inter (cannot know)
server_az = config.availability_zone
self.client_az_list = [server_az]
self.server = PerfInstance(config.vm_name_server,
config,
self.comp,
self.net,
server=True)
self.server.display('Creating server VM...')
self.create_instance(self.server, server_az,
self.net.vm_int_net[0])
# Test throughput for the case of the external host
def ext_host_tp_test(self):
client = PerfInstance('Host-' + ext_host_list[1] + '-Client', config)
if not client.setup_ssh(ext_host_list[1], ext_host_list[0]):
client.display('SSH failed, check IP or make sure public key is configured')
else:
client.buginf('SSH connected')
client.create()
fpr.print_desc('External-VM (upload/download)')
res = client.run_client('External-VM',
self.server.ssh_ip,
self.server,
bandwidth=config.vm_bandwidth,
bidirectional=True)
if res:
rescol.add_flow_result(res)
client.dispose()
def add_location(self, label):
'''Add a note to a label to specify same node or different node.'''
# We can only tell if there is a host part in the az
# e.g. 'nova:GG34-7'
if ':' in self.client.az:
if self.client.az == self.server.az:
return label + ' (intra-node)'
else:
return label + ' (inter-node)'
return label
def create_flow_client(self, client_az, int_net):
self.client = PerfInstance(config.vm_name_client, config,
self.comp,
self.net)
self.create_instance(self.client, client_az, int_net)
def measure_flow(self, label, target_ip):
label = self.add_location(label)
fpr.print_desc(label)
# results for this flow as a dict
perf_output = self.client.run_client(label, target_ip,
self.server,
bandwidth=config.vm_bandwidth,
az_to=self.server.az)
if opts.stop_on_error:
# check if there is any error in the results
results_list = perf_output['results']
for res_dict in results_list:
if 'error' in res_dict:
print('Stopping execution on error, cleanup all VMs/networks manually')
rescol.pprint(perf_output)
sys.exit(2)
rescol.add_flow_result(perf_output)
def measure_vm_flows(self):
network_type = 'Unknown'
try:
network_type = self.net.vm_int_net[0]['provider:network_type']
print "OpenStack network type: " + network_type
rescol.add_property('encapsulation', network_type)
except KeyError as exp:
network_type = 'Unknown'
print "Provider network type not found: ", str(exp)
# scenarios need to be tested for both inter and intra node
# 1. VM to VM on same data network
# 2. VM to VM on separate networks fixed-fixed
# 3. VM to VM on separate networks floating-floating
# we should have 1 or 2 AZ to use (intra and inter-node)
for client_az in self.client_az_list:
self.create_flow_client(client_az, self.net.vm_int_net[0])
self.measure_flow("VM to VM same network fixed IP",
self.server.internal_ip)
self.client.dispose()
self.client = None
if not config.reuse_network_name:
# Different network
self.create_flow_client(client_az, self.net.vm_int_net[1])
self.measure_flow("VM to VM different network fixed IP",
self.server.internal_ip)
self.measure_flow("VM to VM different network floating IP",
self.server.ssh_ip)
self.client.dispose()
self.client = None
# If external network is specified run that case
if ext_host_list:
self.ext_host_tp_test()
def teardown(self):
'''
Clean up the floating ip and VMs
'''
print '---- Cleanup ----'
if self.server:
self.server.dispose()
if self.client:
self.client.dispose()
if not config.reuse_existing_vm and self.net:
self.net.dispose()
# Remove the public key
if self.comp:
self.comp.remove_public_key(config.public_key_name)
# Finally remove the security group
self.comp.security_group_delete(self.sec_group)
def run(self):
error_flag = False
try:
self.setup()
self.measure_vm_flows()
except KeyboardInterrupt:
traceback.format_exc()
except VmtpException:
traceback.format_exc()
error_flag = True
except sshutils.SSHError:
traceback.format_exc()
error_flag = True
if opts.stop_on_error and error_flag:
print('Stopping execution on error, cleanup all VMs/networks manually')
sys.exit(2)
else:
self.teardown()
def test_native_tp(nhosts):
fpr.print_desc('Native Host to Host throughput')
server_host = nhosts[0]
server = PerfInstance('Host-' + server_host[1] + '-Server', config, server=True)
if not server.setup_ssh(server_host[1], server_host[0]):
server.display('SSH failed, check IP or make sure public key is configured')
else:
server.display('SSH connected')
server.create()
# if inter-node-only requested we avoid running the client on the
# same node as the server - but only if there is at least another
# IP provided
if config.inter_node_only and len(nhosts) > 1:
# remove the first element of the list
nhosts.pop(0)
# IP address clients should connect to, check if the user
# has passed a server listen interface name
if len(server_host) == 3:
# use the IP address configured on given interface
server_ip = server.get_interface_ip(server_host[2])
if not server_ip:
print('Error: cannot get IP address for interface ' + server_host[2])
else:
server.display('Clients will use server IP address %s (%s)' %
(server_ip, server_host[2]))
else:
# use same as ssh IP
server_ip = server_host[1]
if server_ip:
# start client side, 1 per host provided
for client_host in nhosts:
client = PerfInstance('Host-' + client_host[1] + '-Client', config)
if not client.setup_ssh(client_host[1], client_host[0]):
client.display('SSH failed, check IP or make sure public key is configured')
else:
client.buginf('SSH connected')
client.create()
res = client.run_client('Native host-host',
server_ip,
server,
bandwidth=config.vm_bandwidth)
rescol.add_flow_result(res)
client.dispose()
server.dispose()
if __name__ == '__main__':
fpr = FlowPrinter()
rescol = ResultsCollector()
parser = argparse.ArgumentParser(description='OpenStack VM Throughput V' + __version__)
parser.add_argument('-c', '--config', dest='config',
action='store',
help='override default values with a config file',
metavar='<config_file>')
parser.add_argument('-r', '--rc', dest='rc',
action='store',
help='source OpenStack credentials from rc file',
metavar='<openrc_file>')
parser.add_argument('-m', '--monitor', dest='monitor',
action='store',
help='Enable CPU monitoring (requires Ganglia)',
metavar='<gmond_ip>[:<port>]')
parser.add_argument('-p', '--password', dest='pwd',
action='store',
help='OpenStack password',
metavar='<password>')
parser.add_argument('-t', '--time', dest='time',
action='store',
help='throughput test duration in seconds (default 10 sec)',
metavar='<time>')
parser.add_argument('--host', dest='hosts',
action='append',
help='native host throughput (targets require ssh key)',
metavar='<user>@<host_ssh_ip>[:<server-listen-if-name>]')
parser.add_argument('--external-host', dest='ext_host',
action='store',
help='external-VM throughput (target requires ssh key)',
metavar='<user>@<ext_host_ssh_ip>')
parser.add_argument('--access_info', dest='access_info',
action='store',
help='access info for control host',
metavar="{'host': '<hostip>', 'user': '<user>', 'password': '<pass>'}")
parser.add_argument('--mongod_server', dest='mongod_server',
action='store',
help='provide mongoDB server IP to store results',
metavar='<server ip>')
parser.add_argument('--json', dest='json',
action='store',
help='store results in json format file',
metavar='<file>')
parser.add_argument('--tp-tool', dest='tp_tool',
action='store',
default='nuttcp',
help='transport perf tool to use (default=nuttcp)',
metavar='nuttcp|iperf')
parser.add_argument('--hypervisor', dest='hypervisors',
action='append',
help='hypervisor to use in the avail zone (1 per arg, up to 2 args)',
metavar='name')
parser.add_argument('--inter-node-only', dest='inter_node_only',
default=False,
action='store_true',
help='only measure inter-node')
parser.add_argument('--protocols', dest='protocols',
action='store',
default='TUI',
help='protocols T(TCP), U(UDP), I(ICMP) - default=TUI (all)',
metavar='T|U|I')
parser.add_argument('--bandwidth', dest='vm_bandwidth',
action='store',
default=0,
help='the bandwidth limit for TCP/UDP flows in K/M/Gbps, '
'e.g. 128K/32M/5G. (default=no limit) ',
metavar='<bandwidth>')
parser.add_argument('--tcpbuf', dest='tcp_pkt_sizes',
action='store',
default=0,
help='list of buffer length when transmitting over TCP in Bytes, '
'e.g. --tcpbuf 8192,65536. (default=65536)',
metavar='<tcp_pkt_size1,...>')
parser.add_argument('--udpbuf', dest='udp_pkt_sizes',
action='store',
default=0,
help='list of buffer length when transmitting over UDP in Bytes, '
'e.g. --udpbuf 128,2048. (default=128,1024,8192)',
metavar='<udp_pkt_size1,...>')
parser.add_argument('--no-env', dest='no_env',
default=False,
action='store_true',
help='do not read env variables')
parser.add_argument('-d', '--debug', dest='debug',
default=False,
action='store_true',
help='debug flag (very verbose)')
parser.add_argument('-v', '--version', dest='version',
default=False,
action='store_true',
help='print version of this script and exit')
parser.add_argument('--stop-on-error', dest='stop_on_error',
default=False,
action='store_true',
help='Stop and keep everything as-is on error (must cleanup manually)')
(opts, args) = parser.parse_known_args()
default_cfg_file = get_absolute_path_for_file("cfg.default.yaml")
# read the default configuration file and possibly an override config file
config = configure.Configuration.from_file(default_cfg_file).configure()
if opts.config:
alt_config = configure.Configuration.from_file(opts.config).configure()
config = config.merge(alt_config)
if opts.version:
print('Version ' + __version__)
sys.exit(0)
# debug flag
config.debug = opts.debug
config.inter_node_only = opts.inter_node_only
config.hypervisors = opts.hypervisors
# time to run each perf test in seconds
if opts.time:
config.time = int(opts.time)
else:
config.time = 10
if opts.json:
config.json_file = opts.json
else:
config.json_file = None
###################################################
# Access info for the server to collect metadata for
# the run.
###################################################
if opts.access_info:
access_info = ast.literal_eval(opts.access_info)
config.access_host = access_info['host']
config.access_username = access_info['user']
config.access_password = access_info['password']
else:
config.access_host = None
config.access_username = None
config.access_password = None
###################################################
# MongoDB Server connection info.
###################################################
if opts.mongod_server:
config.pns_mongod_ip = opts.mongod_server
else:
config.pns_mongod_ip = None
if 'pns_mongod_port' not in config:
# Set MongoDB default port if not set.
config.pns_mongod_port = 27017
# the bandwidth limit for VMs
if opts.vm_bandwidth:
opts.vm_bandwidth = opts.vm_bandwidth.upper().strip()
ex_unit = 'KMG'.find(opts.vm_bandwidth[-1])
try:
if ex_unit == -1:
raise ValueError
val = int(opts.vm_bandwidth[0:-1])
except ValueError:
print 'Invalid --bandwidth parameter. A valid input must '\
'specify only one unit (K|M|G).'
sys.exit(1)
config.vm_bandwidth = int(val * (10 ** (ex_unit * 3)))
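# Worked example (value hypothetical): '--bandwidth 32M' gives ex_unit=1 and
# val=32, so config.vm_bandwidth = 32 * 10**3 = 32000; i.e. the limit appears
# to be stored in Kbps, since the 'K' unit maps to a multiplier of 1.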
# the pkt size for TCP and UDP
if opts.tcp_pkt_sizes:
try:
config.tcp_pkt_sizes = opts.tcp_pkt_sizes.split(',')
for i in xrange(len(config.tcp_pkt_sizes)):
config.tcp_pkt_sizes[i] = int(config.tcp_pkt_sizes[i])
except ValueError:
print 'Invalid --tcpbuf parameter. A valid input must be '\
'integers separated by commas.'
sys.exit(1)
if opts.udp_pkt_sizes:
try:
config.udp_pkt_sizes = opts.udp_pkt_sizes.split(',')
for i in xrange(len(config.udp_pkt_sizes)):
config.udp_pkt_sizes[i] = int(config.udp_pkt_sizes[i])
except ValueError:
print 'Invalid --udpbuf parameter. A valid input must be '\
'integers separated by commas.'
sys.exit(1)
#####################################################
# Set Ganglia server ip and port if the monitoring (-m)
# option is enabled.
#####################################################
config.gmond_svr_ip = None
config.gmond_svr_port = None
if opts.monitor:
# Add the default gmond port if not present
if ':' not in opts.monitor:
opts.monitor += ':8649'
mobj = re.match(r'(\d+\.\d+\.\d+\.\d+):(\d+)', opts.monitor)
if mobj:
config.gmond_svr_ip = mobj.group(1)
config.gmond_svr_port = mobj.group(2)
print "Ganglia monitoring enabled (%s:%s)" % \
(config.gmond_svr_ip, config.gmond_svr_port)
config.time = 30
else:
print 'Invalid --monitor syntax: ' + opts.monitor
###################################################
# Once we parse the config files, normalize
# the paths so that all paths are absolute paths.
###################################################
normalize_paths(config)
# first chmod the local private key since git does not keep the permission
# as this is required by ssh/scp
os.chmod(config.private_key_file, stat.S_IRUSR | stat.S_IWUSR)
# Check the tp-tool name
config.protocols = opts.protocols.upper()
if 'T' in config.protocols or 'U' in config.protocols:
if opts.tp_tool.lower() == 'nuttcp':
config.tp_tool = nuttcp_tool.NuttcpTool
elif opts.tp_tool.lower() == 'iperf':
config.tp_tool = iperf_tool.IperfTool
else:
print 'Invalid transport tool: ' + opts.tp_tool
sys.exit(1)
else:
config.tp_tool = None
# 3 forms are accepted:
# --host 1.1.1.1
# --host root@1.1.1.1
# --host root@1.1.1.1:eth0
# A list of 0 to 2 lists where each nested list is
# a list of 1 to 3 elements. e.g.:
# [['ubuntu', '1.1.1.1'], ['root', '2.2.2.2']]
# [['ubuntu', '1.1.1.1', 'eth0'], ['root', '2.2.2.2']]
# when not provided the default user is 'root'
if opts.hosts:
native_hosts = []
for host in opts.hosts:
# split on '@' first
elem_list = host.split("@")
if len(elem_list) == 1:
elem_list.insert(0, 'root')
# split out the if name if present
# ['root', '1.1.1.1:eth0'] becomes ['root', '1.1.1.1', 'eth0']
if ':' in elem_list[1]:
elem_list.extend(elem_list.pop().split(':'))
if not is_ipv4(elem_list[1]):
print 'Invalid IPv4 address ' + elem_list[1]
sys.exit(1)
native_hosts.append(elem_list)
test_native_tp(native_hosts)
# Add the external host info to a list
# if username is not given assume root as user
if opts.ext_host:
elem_list = opts.ext_host.split("@")
if len(elem_list) == 1:
elem_list.insert(0, 'root')
if not is_ipv4(elem_list[1]):
print 'Invalid IPv4 address ' + elem_list[1]
sys.exit(1)
ext_host_list = elem_list[:]
cred = credentials.Credentials(opts.rc, opts.pwd, opts.no_env)
# replace all command line arguments (after the prog name) with
# those args that have not been parsed by this parser so that the
# unit test parser is not bothered by local arguments
sys.argv[1:] = args
if cred.rc_auth_url:
if opts.debug:
print 'Using ' + cred.rc_auth_url
rescol.add_property('auth_url', cred.rc_auth_url)
vmtp = VmtpTest()
vmtp.run()
if config.json_file:
rescol.save(config.json_file)
if config.pns_mongod_ip:
rescol.save_to_db(config)

19
vmtp/__init__.py Normal file
View File

@ -0,0 +1,19 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
__version__ = pbr.version.VersionInfo(
'vmtp').version_string()

0
vmtp/tests/__init__.py Normal file
View File

23
vmtp/tests/base.py Normal file
View File

@ -0,0 +1,23 @@
# -*- coding: utf-8 -*-
# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslotest import base
class TestCase(base.BaseTestCase):
"""Test case base class for all unit tests."""

28
vmtp/tests/test_vmtp.py Normal file
View File

@ -0,0 +1,28 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
test_vmtp
----------------------------------
Tests for `vmtp` module.
"""
from vmtp.tests import base
class TestVmtp(base.TestCase):
def test_something(self):
pass
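# A minimal sketch of a possible future test, assuming the top-level vmtp.py
# module is importable from the test environment (not the case in this
# initial layout):
#     def test_is_ipv4(self):
#         import vmtp
#         self.assertTrue(vmtp.is_ipv4('10.0.0.1'))
#         self.assertFalse(vmtp.is_ipv4('not-an-ip'))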