Nested provider performance testing

This change duplicates the ideas started with the placement-perfload
job and builds on them to create a set of nested trees that can be
exercised.

In placement-perfload, placeload is used to create the providers. This
proves to be cumbersome for nested topologies, so this change starts
a new model: using parallel [1] plus instrumented gabbi to create
nested topologies in a declarative fashion.
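
In outline, the model looks like this (a sketch only; the loader
generates unique provider uuids for each tree and gabbi reads them via
$ENVIRON — see gate/perfload-nested-runner.sh below for the real
invocation and its parallel options):

    # fan out many independent gabbi runs, each building one tree
    seq 1 $ITERATIONS | parallel $LOADER $PLACEMENT_URL $GABBIT

    # where each $LOADER run boils down to
    CN_UUID=$(uuidgen) N1_UUID=$(uuidgen) N2_UUID=$(uuidgen) TOKEN=admin \
        gabbi-run -q $PLACEMENT_URL -- gate/gabbits/nested-perfload.yaml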

gate/perfload-server.sh sets up placement db and starts a uwsgi server.

gate/perfload-nested-loader.sh is called in the playbook to cause gabbi
to create the nested topology described in
gate/gabbits/nested-perfload.yaml. That topology is intentionally very
naive right now but should be made more realistic as we continue to
develop nested features.
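
For reference, that naive topology is one compute node provider with
two NUMA child providers (values taken from the gabbit):

    compute node: DISK_GB inventory, trait COMPUTE_VOLUME_MULTI_ATTACH
        numa 1:   VCPU and MEMORY_MB inventory, trait HW_CPU_X86_AVX2
        numa 2:   VCPU and MEMORY_MB inventory, trait HW_CPU_X86_SSE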

There's some duplication between perfload.yaml and
nested-perfload.yaml that will be cleared up in a followup.

[1] https://www.gnu.org/software/parallel/ (although the version on
Ubuntu is a non-GPL clone)

Story: 2005443
Task: 30487
Change-Id: I617161fde5b844d7f52dc766f85c1b9f1b139e4a
Chris Dent 2019-06-17 15:47:36 +01:00
parent 9de03e1b17
commit 8723bd7772
7 changed files with 271 additions and 4 deletions

.zuul.yaml

@ -15,6 +15,8 @@
         - openstack-tox-functional
         - openstack-tox-functional-py36
         - placement-nova-tox-functional-py36
+        - placement-nested-perfload:
+            voting: false
         - placement-perfload:
             voting: false
         - tempest-full-py3:
@ -74,3 +76,11 @@
       - ^tox.ini$
     run: playbooks/perfload.yaml
     post-run: playbooks/post.yaml
+
+- job:
+    name: placement-nested-perfload
+    parent: placement-perfload
+    description: |
+      A simple node on which to run placement with the barest of configs and
+      make nested performance related tests against it.
+    run: playbooks/nested-perfload.yaml

gate/README

@ -1,4 +1,14 @@
-These are hooks to be used by the OpenStack infra test system. These scripts
-may be called by certain jobs at important times to do extra testing, setup,
-etc. They are really only relevant within the scope of the OpenStack infra
-system and are not expected to be useful to anyone else.
+This directory contains files used by the OpenStack infra test system. They are
+really only relevant within the scope of the OpenStack infra system and are not
+expected to be useful to anyone else.
+
+These files are a mixture of:
+
+* Hooks and other scripts to be used by the OpenStack infra test system. These
+  scripts may be called by certain jobs at important times to do extra testing,
+  setup, run services, etc.
+
+* "gabbits" are test files to be used with some of the jobs described in
+  .zuul.yaml and playbooks. When changes are made in the gabbits or playbooks
+  it is quite likely that queries in the playbooks or the assertions in the
+  gabbits will need to be updated.

gate/gabbits/nested-perfload.yaml

@ -0,0 +1,83 @@
# This is a single compute node with two NUMA nodes, to show some nesting.
#
# This should be updated to represent something closer to a real
# and expected nested topology. If changes are made here that impact
# the number of total resource providers, then PROVIDER_TOPOLOGY_COUNT in
# gate/perfload-nested-runner.sh should be updated.
defaults:
  request_headers:
    accept: application/json
    content-type: application/json
    openstack-api-version: placement latest
    x-auth-token: $ENVIRON['TOKEN']

tests:

  - name: create one compute node
    POST: /resource_providers
    data:
      uuid: $ENVIRON['CN_UUID']
      name: $ENVIRON['CN_UUID']

  - name: set compute node inventory
    PUT: /resource_providers/$ENVIRON['CN_UUID']/inventories
    data:
      resource_provider_generation: 0
      inventories:
        DISK_GB:
          total: 20480

  - name: set compute node traits
    PUT: /resource_providers/$ENVIRON['CN_UUID']/traits
    data:
      resource_provider_generation: 1
      traits:
        - COMPUTE_VOLUME_MULTI_ATTACH

  - name: create numa 1
    POST: /resource_providers
    data:
      uuid: $ENVIRON['N1_UUID']
      name: numa 1-$ENVIRON['N1_UUID']
      parent_provider_uuid: $ENVIRON['CN_UUID']

  - name: set numa 1 inventory
    PUT: /resource_providers/$ENVIRON['N1_UUID']/inventories
    data:
      resource_provider_generation: 0
      inventories:
        VCPU:
          total: 16
        MEMORY_MB:
          total: 16777216

  - name: set numa 1 traits
    PUT: /resource_providers/$ENVIRON['N1_UUID']/traits
    data:
      resource_provider_generation: 1
      traits:
        - HW_CPU_X86_AVX2

  - name: create numa 2
    POST: /resource_providers
    data:
      uuid: $ENVIRON['N2_UUID']
      name: numa 2-$ENVIRON['N2_UUID']
      parent_provider_uuid: $ENVIRON['CN_UUID']

  - name: set numa 2 inventory
    PUT: /resource_providers/$ENVIRON['N2_UUID']/inventories
    data:
      resource_provider_generation: 0
      inventories:
        VCPU:
          total: 16
        MEMORY_MB:
          total: 16777216

  - name: set numa 2 traits
    PUT: /resource_providers/$ENVIRON['N2_UUID']/traits
    data:
      resource_provider_generation: 1
      traits:
        - HW_CPU_X86_SSE
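
One quick way to eyeball a tree created by this gabbit is the in_tree
query parameter (a sketch; assumes the noauth2 'admin' token used by the
gate scripts and a known compute node uuid in $CN_UUID):

    curl -s -H 'x-auth-token: admin' \
        -H 'openstack-api-version: placement latest' \
        "http://localhost:8000/resource_providers?in_tree=$CN_UUID" | json_pp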

gate/perfload-nested-loader.sh Executable file

@ -0,0 +1,20 @@
#!/bin/bash
set -a
HOST=$1
GABBIT=$2
# By default the placement server is set up with noauth2 authentication
# handling. If that is changed to keystone, a $TOKEN can be generated in
# the calling environment and used instead of the default 'admin'.
TOKEN=${TOKEN:-admin}
# These are the dynamic/unique values for individual resource providers
# that need to be set for each run of a gabbi file. Values that are the same
# for all the resource providers (for example, traits and inventory) should
# be set in $GABBIT.
CN_UUID=$(uuidgen)
N1_UUID=$(uuidgen)
N2_UUID=$(uuidgen)
# Run gabbi silently.
gabbi-run -q $HOST -- $GABBIT
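
For local experimentation the loader can also be run by hand against an
already running placement (a sketch, not something the gate jobs do):

    gate/perfload-nested-loader.sh http://localhost:8000 gate/gabbits/nested-perfload.yaml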

gate/perfload-nested-runner.sh Executable file

@ -0,0 +1,94 @@
#!/bin/bash -x
WORK_DIR=$1
PLACEMENT_URL="http://localhost:8000"
LOG=placement-perf.txt
LOG_DEST=${WORK_DIR}/logs
# The gabbit used to create one nested provider tree. It takes
# inputs from LOADER to create a unique tree.
GABBIT=gate/gabbits/nested-perfload.yaml
LOADER=gate/perfload-nested-loader.sh
# The query to be used to get a list of allocation candidates. If
# $GABBIT is changed, this may need to change.
TRAIT="COMPUTE_VOLUME_MULTI_ATTACH"
TRAIT1="HW_CPU_X86_AVX2"
PLACEMENT_QUERY="resources=DISK_GB:10&resources1=VCPU:1,MEMORY_MB:256&required=${TRAIT}&required1=${TRAIT1}&group_policy=isolate"
# Number of nested trees to create.
ITERATIONS=1000
# Number of times to write an allocation and then time the candidate query again.
ALLOCATIONS_TO_WRITE=10
# The number of providers in each nested tree. This will need to
# change whenever the resource provider topology created in
# $GABBIT is changed.
PROVIDER_TOPOLOGY_COUNT=3
# Expected total number of providers, used to check that creation
# was a success.
TOTAL_PROVIDER_COUNT=$((ITERATIONS * PROVIDER_TOPOLOGY_COUNT))
trap "sudo cp -p $LOG $LOG_DEST" EXIT
function time_candidates {
(
echo "##### TIMING GET /allocation_candidates?${PLACEMENT_QUERY} twice"
time curl -s -H 'x-auth-token: admin' -H 'openstack-api-version: placement latest' "${PLACEMENT_URL}/allocation_candidates?${PLACEMENT_QUERY}" > /dev/null
time curl -s -H 'x-auth-token: admin' -H 'openstack-api-version: placement latest' "${PLACEMENT_URL}/allocation_candidates?${PLACEMENT_QUERY}" > /dev/null
) 2>&1 | tee -a $LOG
}
function write_allocation {
# Take the first allocation request and send it back as a well-formed allocation
curl -s -H 'x-auth-token: admin' -H 'openstack-api-version: placement latest' "${PLACEMENT_URL}/allocation_candidates?${PLACEMENT_QUERY}&limit=5" \
| jq --arg proj $(uuidgen) --arg user $(uuidgen) '.allocation_requests[0] + {consumer_generation: null, project_id: $proj, user_id: $user}' \
| curl -s -H 'x-auth-token: admin' -H 'content-type: application/json' -H 'openstack-api-version: placement latest' \
-X PUT -d @- "${PLACEMENT_URL}/allocations/$(uuidgen)"
}
function load_candidates {
time_candidates
for iter in $(seq 1 $ALLOCATIONS_TO_WRITE); do
echo "##### Writing allocation ${iter}" | tee -a $LOG
write_allocation
time_candidates
done
}
function check_placement {
local rp_count
local code
code=0
python -m virtualenv -p python3 .perfload
. .perfload/bin/activate
# install gabbi, which provides the gabbi-run used by $LOADER
pip install gabbi
# Create $TOTAL_PROVIDER_COUNT nested resource provider trees,
# each tree having $PROVIDER_TOPOLOGY_COUNT resource providers.
# LOADER is called $ITERATIONS times in parallel by 3 * number
# of processors on the host.
echo "##### Creating $TOTAL_PROVIDER_COUNT providers" | tee -a $LOG
seq 1 $ITERATIONS | parallel -P 300% $LOADER $PLACEMENT_URL $GABBIT
set +x
rp_count=$(curl -H 'x-auth-token: admin' ${PLACEMENT_URL}/resource_providers |json_pp|grep -c '"name"')
# Skip curl and note if we failed to create the required number of rps
if [[ $rp_count -ge $TOTAL_PROVIDER_COUNT ]]; then
load_candidates
else
(
echo "Unable to create expected number of resource providers. Expected: ${COUNT}, Got: $rp_count"
echo "See job-output.txt.gz and logs/screen-placement-api.txt.gz for additional detail."
) | tee -a $LOG
code=1
fi
set -x
deactivate
exit $code
}
check_placement
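
For orientation, the body that write_allocation PUTs to /allocations is
roughly the following (a sketch with illustrative provider uuids; the
exact resources depend on which allocation request comes back first):

    {
      "allocations": {
        "<CN_UUID>": {"resources": {"DISK_GB": 10}},
        "<N1_UUID>": {"resources": {"VCPU": 1, "MEMORY_MB": 256}}
      },
      "consumer_generation": null,
      "project_id": "<random uuid>",
      "user_id": "<random uuid>"
    }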

gate/perfload-server.sh Executable file

@ -0,0 +1,30 @@
#!/bin/bash -x
WORK_DIR=$1
# create database
sudo debconf-set-selections <<MYSQL_PRESEED
mysql-server mysql-server/root_password password secret
mysql-server mysql-server/root_password_again password secret
mysql-server mysql-server/start_on_boot boolean true
MYSQL_PRESEED
sudo apt-get install -y mysql-server mysql-client libmysqlclient-dev jq parallel
sudo mysql -uroot -psecret -e "DROP DATABASE IF EXISTS placement;"
sudo mysql -uroot -psecret -e "CREATE DATABASE placement CHARACTER SET utf8;"
sudo mysql -uroot -psecret -e "GRANT ALL PRIVILEGES ON placement.* TO 'root'@'%' identified by 'secret';"
# Create a virtualenv for placement to run in
python -m virtualenv -p python3 .placement
. .placement/bin/activate
pip install . PyMySQL uwsgi
# set config via environment
export OS_PLACEMENT_DATABASE__CONNECTION=mysql+pymysql://root:secret@127.0.0.1/placement?charset=utf8
export OS_PLACEMENT_DATABASE__MAX_POOL_SIZE=25
export OS_PLACEMENT_DATABASE__MAX_OVERFLOW=100
export OS_PLACEMENT_DATABASE__SYNC_ON_STARTUP=True
# Increase our chances of allocating to different providers.
export OS_PLACEMENT_PLACEMENT__RANDOMIZE_ALLOCATION_CANDIDATES=True
export OS_DEFAULT__DEBUG=True
export OS_API__AUTH_STRATEGY=noauth2
uwsgi --http :8000 --wsgi-file .placement/bin/placement-api --daemonize ${WORK_DIR}/logs/placement-api.log --processes 5 --threads 25
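
Once uwsgi is up, a quick liveness check is to ask for the version
document at the root URL, which does not require a token (a sketch):

    curl -s http://localhost:8000/ | json_pp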

playbooks/nested-perfload.yaml

@ -0,0 +1,20 @@
- hosts: all
  tasks:
    - name: Ensure {{ ansible_user_dir }}/logs exists
      become: true
      file:
        path: "{{ ansible_user_dir }}/logs"
        state: directory
        owner: "{{ ansible_user }}"
    - name: start placement
      args:
        chdir: "{{ ansible_user_dir }}/src/opendev.org/openstack/placement"
      shell:
        executable: /bin/bash
        cmd: gate/perfload-server.sh {{ ansible_user_dir }}
    - name: placement performance
      args:
        chdir: "{{ ansible_user_dir }}/src/opendev.org/openstack/placement"
      shell:
        executable: /bin/bash
        cmd: gate/perfload-nested-runner.sh {{ ansible_user_dir }}
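
Outside of Zuul the same flow can be approximated by hand from a
placement checkout (a sketch; the argument only needs to be a writable
directory with a logs/ subdirectory for the uwsgi and perf logs):

    WORK_DIR=$HOME/perfload
    mkdir -p $WORK_DIR/logs
    gate/perfload-server.sh $WORK_DIR
    gate/perfload-nested-runner.sh $WORK_DIR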