[evil] Remove OpenStack-related plugins

NOTE: the existing legacy jobs should continue to work thanks to several
applied hacks.

The new home for OpenStack plugins:
  https://git.openstack.org/cgit/openstack/rally-openstack
  https://github.com/openstack/rally-openstack
  https://pypi.org/project/rally-openstack/

Change-Id: I36e56759f02fe9560b454b7a396e0493da0be0ab
Andrey Kurilin 2018-05-08 16:37:21 +03:00
parent 86cd7dfa5c
commit 7addc521c1
1316 changed files with 91 additions and 98204 deletions
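
The plugins removed below remain available from the new rally-openstack package.
A minimal sketch of switching over, assuming a standard pip-based environment
(the package name comes from the PyPI link above):

# Install the relocated OpenStack plugins package for Rally.
# Assumption: the plugins are picked up automatically once the package is installed.
pip install rally-openstack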


@@ -1,20 +0,0 @@
---
service_list:
- authentication
- nova
- neutron
- keystone
- cinder
- glance
use_existing_users: false
image_name: "^(cirros.*-disk|TestVM)$"
flavor_name: "m1.tiny"
glance_image_location: "http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img"
smoke: true
users_amount: 1
tenants_amount: 1
controllers_amount: 1
compute_amount: 1
storage_amount: 1
network_amount: 1

File diff suppressed because it is too large


@@ -1,46 +0,0 @@
heat_template_version: 2013-05-23
parameters:
flavor:
type: string
default: m1.tiny
constraints:
- custom_constraint: nova.flavor
image:
type: string
default: cirros-0.3.5-x86_64-disk
constraints:
- custom_constraint: glance.image
scaling_adjustment:
type: number
default: 1
max_size:
type: number
default: 5
constraints:
- range: {min: 1}
resources:
asg:
type: OS::Heat::AutoScalingGroup
properties:
resource:
type: OS::Nova::Server
properties:
image: { get_param: image }
flavor: { get_param: flavor }
min_size: 1
desired_capacity: 3
max_size: { get_param: max_size }
scaling_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: {get_resource: asg}
scaling_adjustment: { get_param: scaling_adjustment }
outputs:
scaling_url:
value: {get_attr: [scaling_policy, alarm_url]}


@@ -1,17 +0,0 @@
heat_template_version: 2013-05-23
resources:
test_group:
type: OS::Heat::AutoScalingGroup
properties:
desired_capacity: 0
max_size: 0
min_size: 0
resource:
type: OS::Heat::RandomString
test_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: test_group }
scaling_adjustment: 1


@@ -1 +0,0 @@
heat_template_version: 2014-10-16


@@ -1 +0,0 @@
{"input1": "value1", "some_json_input": {"a": "b"}}


@@ -1 +0,0 @@
{"env": {"env_param": "env_param_value"}}


@@ -1,16 +0,0 @@
---
version: "2.0"
name: wb
workflows:
wf1:
type: direct
input:
- input1: input1
- some_json_input: {}
tasks:
hello:
action: std.echo output="Hello"
publish:
result: $


@@ -1,25 +0,0 @@
Namespaces:
=: io.murano.apps
std: io.murano
sys: io.murano.system
Name: HelloReporter
Extends: std:Application
Properties:
name:
Contract: $.string().notNull()
Workflow:
initialize:
Body:
- $.environment: $.find(std:Environment).require()
deploy:
Body:
- If: not $.getAttr(deployed, false)
Then:
- $.environment.reporter.report($this, 'Starting deployment! Hello!')
- $.setAttr(deployed, True)


@@ -1,23 +0,0 @@
Version: 2
Application:
?:
type: io.murano.apps.HelloReporter
name: $.appConfiguration.name
Forms:
- appConfiguration:
fields:
- name: name
type: string
label: Application Name
description: >-
Enter a desired name for the application. Only A-Z, a-z, 0-9, dash and
underscore are allowed
- name: unitNamingPattern
type: string
required: false
hidden: true
widgetMedia:
js: ['muranodashboard/js/support_placeholder.js']
css: {all: ['muranodashboard/css/support_placeholder.css']}


@@ -1,10 +0,0 @@
Format: 1.0
Type: Application
FullName: io.murano.apps.HelloReporter
Name: HelloReporter
Description: |
HelloReporter test app.
Author: 'Mirantis, Inc'
Tags: []
Classes:
io.murano.apps.HelloReporter: HelloReporter.yaml


@@ -1,17 +0,0 @@
Murano applications
===================

Files for Murano plugins.

Structure
---------

* <application_name>/ directories. Each directory stores a simple Murano package
for environment deployment in the Murano context. There may also be other files
needed by the application.

Useful links
------------

* `More about Murano packages <https://wiki.openstack.org/wiki/Murano/Documentation/How_to_create_application_package>`_


@@ -1,13 +0,0 @@
heat_template_version: 2014-10-16
description: Test template for rally create-update-delete scenario
resources:
test_string_one:
type: OS::Heat::RandomString
properties:
length: 20
test_string_two:
type: OS::Heat::RandomString
properties:
length: 20


@@ -1,13 +0,0 @@
heat_template_version: 2014-10-16
description: Test template for rally create-update-delete scenario
resources:
test_group:
type: OS::Heat::ResourceGroup
properties:
count: 2
resource_def:
type: OS::Heat::RandomString
properties:
length: 20


@@ -1,44 +0,0 @@
heat_template_version: 2014-10-16
description: >
Test template that creates a resource group with servers and volumes.
The template allows creating many nested stacks with a standard
configuration: a nova instance and a cinder volume attached to that instance
parameters:
num_instances:
type: number
description: number of instances that should be created in resource group
constraints:
- range: {min: 1}
instance_image:
type: string
default: cirros-0.3.5-x86_64-disk
instance_volume_size:
type: number
description: Size of volume to attach to instance
default: 1
constraints:
- range: {min: 1, max: 1024}
instance_flavor:
type: string
description: Type of the instance to be created.
default: m1.tiny
instance_availability_zone:
type: string
description: The Availability Zone to launch the instance.
default: nova
resources:
group_of_volumes:
type: OS::Heat::ResourceGroup
properties:
count: {get_param: num_instances}
resource_def:
type: /home/jenkins/.rally/extra/server_with_volume.yaml.template
properties:
image: {get_param: instance_image}
volume_size: {get_param: instance_volume_size}
flavor: {get_param: instance_flavor}
availability_zone: {get_param: instance_availability_zone}


@@ -1,21 +0,0 @@
heat_template_version: 2013-05-23
description: Template for testing caching.
parameters:
count:
type: number
default: 40
delay:
type: number
default: 0.3
resources:
rg:
type: OS::Heat::ResourceGroup
properties:
count: {get_param: count}
resource_def:
type: OS::Heat::TestResource
properties:
constraint_prop_secs: {get_param: delay}


@@ -1,37 +0,0 @@
heat_template_version: 2013-05-23
parameters:
attr_wait_secs:
type: number
default: 0.5
resources:
rg:
type: OS::Heat::ResourceGroup
properties:
count: 10
resource_def:
type: OS::Heat::TestResource
properties:
attr_wait_secs: {get_param: attr_wait_secs}
outputs:
val1:
value: {get_attr: [rg, resource.0.output]}
val2:
value: {get_attr: [rg, resource.1.output]}
val3:
value: {get_attr: [rg, resource.2.output]}
val4:
value: {get_attr: [rg, resource.3.output]}
val5:
value: {get_attr: [rg, resource.4.output]}
val6:
value: {get_attr: [rg, resource.5.output]}
val7:
value: {get_attr: [rg, resource.6.output]}
val8:
value: {get_attr: [rg, resource.7.output]}
val9:
value: {get_attr: [rg, resource.8.output]}
val10:
value: {get_attr: [rg, resource.9.output]}


@@ -1,64 +0,0 @@
heat_template_version: 2013-05-23
parameters:
# set correct defaults for all parameters before launching the test
public_net:
type: string
default: public
image:
type: string
default: cirros-0.3.5-x86_64-disk
flavor:
type: string
default: m1.tiny
cidr:
type: string
default: 11.11.11.0/24
resources:
server:
type: OS::Nova::Server
properties:
image: {get_param: image}
flavor: {get_param: flavor}
networks:
- port: { get_resource: server_port }
router:
type: OS::Neutron::Router
properties:
external_gateway_info:
network: {get_param: public_net}
router_interface:
type: OS::Neutron::RouterInterface
properties:
router_id: { get_resource: router }
subnet_id: { get_resource: private_subnet }
private_net:
type: OS::Neutron::Net
private_subnet:
type: OS::Neutron::Subnet
properties:
network: { get_resource: private_net }
cidr: {get_param: cidr}
port_security_group:
type: OS::Neutron::SecurityGroup
properties:
name: default_port_security_group
description: >
Default security group assigned to the port. The neutron default group is not
used because neutron creates several groups with the same name (default) and
nova cannot choose which one to use.
server_port:
type: OS::Neutron::Port
properties:
network: {get_resource: private_net}
fixed_ips:
- subnet: { get_resource: private_subnet }
security_groups:
- { get_resource: port_security_group }


@@ -1,39 +0,0 @@
heat_template_version: 2013-05-23
parameters:
# set correct defaults for all parameters before launching the test
image:
type: string
default: cirros-0.3.5-x86_64-disk
flavor:
type: string
default: m1.tiny
availability_zone:
type: string
description: The Availability Zone to launch the instance.
default: nova
volume_size:
type: number
description: Size of the volume to be created.
default: 1
constraints:
- range: { min: 1, max: 1024 }
description: must be between 1 and 1024 GB.
resources:
server:
type: OS::Nova::Server
properties:
image: {get_param: image}
flavor: {get_param: flavor}
cinder_volume:
type: OS::Cinder::Volume
properties:
size: { get_param: volume_size }
availability_zone: { get_param: availability_zone }
volume_attachment:
type: OS::Cinder::VolumeAttachment
properties:
volume_id: { get_resource: cinder_volume }
instance_uuid: { get_resource: server}
mountpoint: /dev/vdc


@@ -1,23 +0,0 @@
heat_template_version: 2013-05-23
description: >
Test template for create-update-delete-stack scenario in rally.
The template updates resource parameters without resource re-creation (replacement)
in the stack defined by autoscaling_policy.yaml.template. It allows measuring
the performance of the "pure" resource update operation only.
resources:
test_group:
type: OS::Heat::AutoScalingGroup
properties:
desired_capacity: 0
max_size: 0
min_size: 0
resource:
type: OS::Heat::RandomString
test_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: { get_resource: test_group }
scaling_adjustment: -1


@@ -1,19 +0,0 @@
heat_template_version: 2014-10-16
description: >
Test template for create-update-delete-stack scenario in rally.
The template updates the stack defined by random_strings.yaml.template with an additional resource.
resources:
test_string_one:
type: OS::Heat::RandomString
properties:
length: 20
test_string_two:
type: OS::Heat::RandomString
properties:
length: 20
test_string_three:
type: OS::Heat::RandomString
properties:
length: 20


@@ -1,11 +0,0 @@
heat_template_version: 2014-10-16
description: >
Test template for create-update-delete-stack scenario in rally.
The template deletes one resource from the stack defined by random_strings.yaml.template.
resources:
test_string_one:
type: OS::Heat::RandomString
properties:
length: 20


@@ -1,19 +0,0 @@
heat_template_version: 2014-10-16
description: >
Test template for create-update-delete-stack scenario in rally.
The template deletes one resource from the stack defined by
random_strings.yaml.template and re-creates it with the updated parameters
(so-called update-replace). That happens because some parameters cannot be
changed without resource re-creation. The template allows measuring the
performance of the update-replace operation.
resources:
test_string_one:
type: OS::Heat::RandomString
properties:
length: 20
test_string_two:
type: OS::Heat::RandomString
properties:
length: 40


@@ -1,16 +0,0 @@
heat_template_version: 2014-10-16
description: >
Test template for create-update-delete-stack scenario in rally.
The template updates one resource from the stack defined by resource_group.yaml.template
and adds child resources to that resource.
resources:
test_group:
type: OS::Heat::ResourceGroup
properties:
count: 3
resource_def:
type: OS::Heat::RandomString
properties:
length: 20


@@ -1,16 +0,0 @@
heat_template_version: 2014-10-16
description: >
Test template for create-update-delete-stack scenario in rally.
The template updates one resource from the stack defined by resource_group.yaml.template
and deletes child resources from that resource.
resources:
test_group:
type: OS::Heat::ResourceGroup
properties:
count: 1
resource_def:
type: OS::Heat::RandomString
properties:
length: 20


@@ -1,219 +0,0 @@
heat_template_version: 2014-10-16
description: >
Heat WordPress template to support F23, using only Heat OpenStack-native
resource types, and without the requirement for heat-cfntools in the image.
WordPress is web software you can use to create a beautiful website or blog.
This template installs a single-instance WordPress deployment using a local
MySQL database to store the data.
parameters:
wp_instances_count:
type: number
default: 1
timeout:
type: number
description: Timeout for WaitCondition, seconds
default: 1000
router_id:
type: string
description: ID of the router
default: b9135c24-d998-4e2f-b0aa-2b0a40c21ae5
network_id:
type: string
description: ID of the network to allocate floating IP from
default: 4eabc459-0096-4479-b105-67ec0cff18cb
key_name:
type: string
description: Name of a KeyPair to enable SSH access to the instance
default: nova-kp
wp_instance_type:
type: string
description: Instance type for WordPress server
default: m1.small
wp_image:
type: string
description: >
Name or ID of the image to use for the WordPress server.
Recommended value is fedora-23.x86_64;
http://cloud.fedoraproject.org/fedora-23.x86_64.qcow2.
default: fedora-23.x86_64
image:
type: string
description: >
Name or ID of the image to use for the gate-node.
default: fedora-23.x86_64
instance_type:
type: string
description: Instance type for gate-node.
default: m1.small
db_name:
type: string
description: WordPress database name
default: wordpress
constraints:
- length: { min: 1, max: 64 }
description: db_name must be between 1 and 64 characters
- allowed_pattern: '[a-zA-Z][a-zA-Z0-9]*'
description: >
db_name must begin with a letter and contain only alphanumeric
characters
db_username:
type: string
description: The WordPress database admin account username
default: admin
hidden: true
constraints:
- length: { min: 1, max: 16 }
description: db_username must be between 1 and 16 characters
- allowed_pattern: '[a-zA-Z][a-zA-Z0-9]*'
description: >
db_username must begin with a letter and contain only alphanumeric
characters
db_password:
type: string
description: The WordPress database admin account password
default: admin
hidden: true
constraints:
- length: { min: 1, max: 41 }
description: db_password must be between 1 and 41 characters
- allowed_pattern: '[a-zA-Z0-9]*'
description: db_password must contain only alphanumeric characters
db_root_password:
type: string
description: Root password for MySQL
default: admin
hidden: true
constraints:
- length: { min: 1, max: 41 }
description: db_root_password must be between 1 and 41 characters
- allowed_pattern: '[a-zA-Z0-9]*'
description: db_root_password must contain only alphanumeric characters
resources:
wordpress_instances:
type: OS::Heat::ResourceGroup
properties:
count: {get_param: wp_instances_count}
resource_def:
type: wp-instances.yaml
properties:
name: wp_%index%
image: { get_param: wp_image }
flavor: { get_param: wp_instance_type }
key_name: { get_param: key_name }
db_root_password: { get_param: db_root_password }
db_name: { get_param: db_name }
db_username: { get_param: db_username }
db_password: { get_param: db_password }
wc_notify: { get_attr: ['wait_handle', 'curl_cli'] }
subnet: {get_resource: subnet}
network: {get_resource: network}
security_group: {get_resource: security_group}
gate_instance:
type: OS::Nova::Server
properties:
image: { get_param: image }
flavor: { get_param: instance_type }
key_name: { get_param: key_name }
networks:
- port: {get_resource: port_gate}
user_data_format: RAW
user_data: |
#cloud-config
packages:
- python
- siege
- httpd-tools
security_group:
type: OS::Neutron::SecurityGroup
properties:
rules:
- port_range_max: null
port_range_min: null
protocol: icmp
remote_ip_prefix: 0.0.0.0/0
- port_range_max: 80
port_range_min: 80
protocol: tcp
remote_ip_prefix: 0.0.0.0/0
- port_range_max: 443
port_range_min: 443
protocol: tcp
remote_ip_prefix: 0.0.0.0/0
- port_range_max: 22
port_range_min: 22
protocol: tcp
remote_ip_prefix: 0.0.0.0/0
network:
type: OS::Neutron::Net
properties:
name: wordpress-network
subnet:
type: OS::Neutron::Subnet
properties:
cidr: 10.0.0.1/24
dns_nameservers: [8.8.8.8]
ip_version: 4
network: {get_resource: network}
port_gate:
type: OS::Neutron::Port
properties:
fixed_ips:
- subnet: {get_resource: subnet}
network: {get_resource: network}
replacement_policy: AUTO
security_groups:
- {get_resource: security_group}
floating_ip:
type: OS::Neutron::FloatingIP
properties:
port_id: {get_resource: port_gate}
floating_network: {get_param: network_id}
router_interface:
type: OS::Neutron::RouterInterface
properties:
router_id: {get_param: router_id}
subnet: {get_resource: subnet}
wait_condition:
type: OS::Heat::WaitCondition
properties:
handle: {get_resource: wait_handle}
count: {get_param: wp_instances_count}
timeout: {get_param: timeout}
wait_handle:
type: OS::Heat::WaitConditionHandle
outputs:
curl_cli:
value: { get_attr: ['wait_handle', 'curl_cli'] }
wp_nodes:
value: { get_attr: ['wordpress_instances', 'attributes', 'ip'] }
gate_node:
value: { get_attr: ['floating_ip', 'floating_ip_address'] }
net_name:
value: { get_attr: ['network', 'name'] }


@@ -1,82 +0,0 @@
heat_template_version: 2014-10-16
parameters:
name: { type: string }
wc_notify: { type: string }
subnet: { type: string }
network: { type: string }
security_group: { type: string }
key_name: { type: string }
flavor: { type: string }
image: { type: string }
db_name: { type: string }
db_username: { type: string }
db_password: { type: string }
db_root_password: { type: string }
resources:
wordpress_instance:
type: OS::Nova::Server
properties:
name: { get_param: name }
image: { get_param: image }
flavor: { get_param: flavor }
key_name: { get_param: key_name }
networks:
- port: {get_resource: port}
user_data_format: RAW
user_data:
str_replace:
template: |
#!/bin/bash -v
sudo yum -y install mariadb mariadb-server httpd wordpress curl
sudo touch /var/log/mariadb/mariadb.log
sudo chown mysql.mysql /var/log/mariadb/mariadb.log
sudo systemctl start mariadb.service
# Setup MySQL root password and create a user
sudo mysqladmin -u root password db_rootpassword
cat << EOF | mysql -u root --password=db_rootpassword
CREATE DATABASE db_name;
GRANT ALL PRIVILEGES ON db_name.* TO "db_user"@"localhost"
IDENTIFIED BY "db_password";
FLUSH PRIVILEGES;
EXIT
EOF
sudo sed -i "/Deny from All/d" /etc/httpd/conf.d/wordpress.conf
sudo sed -i "s/Require local/Require all granted/" /etc/httpd/conf.d/wordpress.conf
sudo sed -i s/database_name_here/db_name/ /etc/wordpress/wp-config.php
sudo sed -i s/username_here/db_user/ /etc/wordpress/wp-config.php
sudo sed -i s/password_here/db_password/ /etc/wordpress/wp-config.php
sudo systemctl start httpd.service
IP=$(ip r get 8.8.8.8 | grep src | awk '{print $7}')
curl --data 'user_name=admin&password=123&password2=123&admin_email=asd@asd.com' http://$IP/wordpress/wp-admin/install.php?step=2
mkfifo /tmp/data
(for i in $(seq 1000); do
echo -n "1,$i,$i,page,"
head -c 100000 /dev/urandom | base64 -w 0
echo
done
) > /tmp/data &
mysql -u root --password=db_rootpassword wordpress -e 'LOAD DATA LOCAL INFILE "/tmp/data" INTO TABLE wp_posts FIELDS TERMINATED BY "," (post_author,post_title,post_name,post_type,post_content);'
sudo sh -c 'echo "172.16.0.6 mos80-ssl.fuel.local" >> /etc/hosts'
wc_notify --insecure --data-binary '{"status": "SUCCESS"}'
params:
db_rootpassword: { get_param: db_root_password }
db_name: { get_param: db_name }
db_user: { get_param: db_username }
db_password: { get_param: db_password }
wc_notify: { get_param: wc_notify }
port:
type: OS::Neutron::Port
properties:
fixed_ips:
- subnet: {get_param: subnet}
network: {get_param: network}
replacement_policy: AUTO
security_groups:
- {get_param: security_group}
outputs:
ip:
value: { get_attr: ['wordpress_instance', 'networks'] }


@@ -1,297 +0,0 @@
---
version: 2
title: Task for gate-rally-dsvm-rally-heat-nv job
description: >
This task contains various scenarios for testing heat plugins
subtasks:
-
title: HeatStacks.create_and_list_stack tests
scenario:
HeatStacks.create_and_list_stack:
template_path: "~/.rally/extra/default.yaml.template"
runner:
constant:
times: 6
concurrency: 3
contexts:
users:
tenants: 1
users_per_tenant: 1
-
title: HeatStacks.create_and_delete_stack tests
workloads:
-
scenario:
HeatStacks.create_and_delete_stack:
template_path: "~/.rally/extra/default.yaml.template"
runner:
constant:
times: 6
concurrency: 3
contexts:
users:
tenants: 2
users_per_tenant: 3
-
scenario:
HeatStacks.create_and_delete_stack:
template_path: "~/.rally/extra/server_with_volume.yaml.template"
runner:
constant:
times: 2
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
scenario:
HeatStacks.create_and_delete_stack:
template_path: "~/.rally/extra/resource_group_server_with_volume.yaml.template"
parameters:
num_instances: 2
files: ["~/.rally/extra/server_with_volume.yaml.template"]
runner:
constant:
times: 2
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 1
-
scenario:
HeatStacks.create_and_delete_stack:
template_path: "~/.rally/extra/resource_group_with_constraint.yaml.template"
runner:
constant:
times: 6
concurrency: 3
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: HeatStacks.create_check_delete_stack tests
scenario:
HeatStacks.create_check_delete_stack:
template_path: "~/.rally/extra/random_strings.yaml.template"
runner:
constant:
times: 6
concurrency: 3
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: HeatStacks.create_update_delete_stack tests
workloads:
-
scenario:
HeatStacks.create_update_delete_stack:
template_path: "~/.rally/extra/random_strings.yaml.template"
updated_template_path: "~/.rally/extra/updated_random_strings_add.yaml.template"
runner:
constant:
times: 6
concurrency: 3
contexts:
users:
tenants: 2
users_per_tenant: 2
-
scenario:
HeatStacks.create_update_delete_stack:
template_path: "~/.rally/extra/random_strings.yaml.template"
updated_template_path: "~/.rally/extra/updated_random_strings_delete.yaml.template"
runner:
constant:
times: 6
concurrency: 3
contexts:
users:
tenants: 2
users_per_tenant: 2
-
scenario:
HeatStacks.create_update_delete_stack:
template_path: "~/.rally/extra/random_strings.yaml.template"
updated_template_path: "~/.rally/extra/updated_random_strings_replace.yaml.template"
runner:
constant:
times: 6
concurrency: 3
contexts:
users:
tenants: 2
users_per_tenant: 2
-
scenario:
HeatStacks.create_update_delete_stack:
template_path: "~/.rally/extra/autoscaling_policy.yaml.template"
updated_template_path: "~/.rally/extra/updated_autoscaling_policy_inplace.yaml.template"
runner:
constant:
times: 6
concurrency: 3
contexts:
users:
tenants: 2
users_per_tenant: 2
-
scenario:
HeatStacks.create_update_delete_stack:
template_path: "~/.rally/extra/resource_group.yaml.template"
updated_template_path: "~/.rally/extra/updated_resource_group_increase.yaml.template"
runner:
constant:
times: 6
concurrency: 3
contexts:
users:
tenants: 2
users_per_tenant: 3
-
scenario:
HeatStacks.create_update_delete_stack:
template_path: "~/.rally/extra/resource_group.yaml.template"
updated_template_path: "~/.rally/extra/updated_resource_group_reduce.yaml.template"
runner:
constant:
times: 6
concurrency: 3
contexts:
users:
tenants: 2
users_per_tenant: 3
-
title: HeatStacks.create_suspend_resume_delete_stack tests
scenario:
HeatStacks.create_suspend_resume_delete_stack:
template_path: "~/.rally/extra/random_strings.yaml.template"
runner:
constant:
times: 6
concurrency: 3
contexts:
users:
tenants: 2
users_per_tenant: 3
-
title: HeatStacks.create_snapshot_restore_delete_stack tests
scenario:
HeatStacks.create_snapshot_restore_delete_stack:
template_path: "~/.rally/extra/random_strings.yaml.template"
runner:
constant:
times: 6
concurrency: 3
contexts:
users:
tenants: 2
users_per_tenant: 3
-
title: HeatStacks.create_stack_and_scale tests
workloads:
-
scenario:
HeatStacks.create_stack_and_scale:
template_path: "~/.rally/extra/autoscaling_group.yaml.template"
output_key: "scaling_url"
delta: 1
parameters:
scaling_adjustment: 1
runner:
constant:
times: 2
concurrency: 1
contexts:
users:
tenants: 2
users_per_tenant: 1
-
scenario:
HeatStacks.create_stack_and_scale:
template_path: "~/.rally/extra/autoscaling_group.yaml.template"
output_key: "scaling_url"
delta: -1
parameters:
scaling_adjustment: -1
runner:
constant:
times: 2
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 1
-
title: HeatStacks.create_stack_and_list_output tests
scenario:
HeatStacks.create_stack_and_list_output:
template_path: "~/.rally/extra/resource_group_with_outputs.yaml.template"
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: HeatStacks.create_stack_and_list_output_via_API tests
scenario:
HeatStacks.create_stack_and_list_output_via_API:
template_path: "~/.rally/extra/resource_group_with_outputs.yaml.template"
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: HeatStacks.create_stack_and_show_output tests
scenario:
HeatStacks.create_stack_and_show_output:
template_path: "~/.rally/extra/resource_group_with_outputs.yaml.template"
output_key: "val1"
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: HeatStacks.create_stack_and_show_output_via_API tests
scenario:
HeatStacks.create_stack_and_show_output_via_API:
template_path: "~/.rally/extra/resource_group_with_outputs.yaml.template"
output_key: "val1"
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: Authenticate.validate_heat tests
scenario:
Authenticate.validate_heat:
repetitions: 2
runner:
constant:
times: 10
concurrency: 5
contexts:
users:
tenants: 3
users_per_tenant: 5


@@ -1,182 +0,0 @@
---
version: 2
title: Task for gate-rally-dsvm-designate-rally-pdns4-ubuntu-xenial-nv job
description: >
This task contains various scenarios for testing designate plugins
subtasks:
-
title: DesignateBasic.create_and_delete_domain tests
scenario:
DesignateBasic.create_and_delete_domain: {}
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: DesignateBasic.create_and_update_domain tests
scenario:
DesignateBasic.create_and_update_domain: {}
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: DesignateBasic.create_and_delete_records tests
scenario:
DesignateBasic.create_and_delete_records:
records_per_domain: 5
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: DesignateBasic.create_and_list_domains tests
scenario:
DesignateBasic.create_and_list_domains: {}
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: DesignateBasic.create_and_list_records tests
scenario:
DesignateBasic.create_and_list_records:
records_per_domain: 5
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: DesignateBasic.list_domains tests
scenario:
DesignateBasic.list_domains: {}
runner:
constant:
times: 3
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: DesignateBasic.create_and_list_servers tests
scenario:
DesignateBasic.create_and_list_servers: {}
runner:
constant:
times: 4
concurrency: 1
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: DesignateBasic.create_and_delete_server tests
scenario:
DesignateBasic.create_and_delete_server: {}
runner:
constant:
times: 4
concurrency: 1
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: DesignateBasic.list_servers tests
scenario:
DesignateBasic.list_servers: {}
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: DesignateBasic.create_and_list_zones tests
scenario:
DesignateBasic.create_and_list_zones: {}
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: DesignateBasic.create_and_delete_zone tests
scenario:
DesignateBasic.create_and_delete_zone: {}
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: DesignateBasic.create_and_list_recordsets tests
scenario:
DesignateBasic.create_and_list_recordsets: {}
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
zones:
zones_per_tenant: 1
-
title: DesignateBasic.create_and_delete_recordsets tests
scenario:
DesignateBasic.create_and_delete_recordsets: {}
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
zones:
zones_per_tenant: 1
-
title: DesignateBasic.list_zones tests
scenario:
DesignateBasic.list_zones: {}
runner:
constant:
times: 4
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
zones:
zones_per_tenant: 10


@@ -1,36 +0,0 @@
---
version: 2
title: Task for gate-rally-dsvm-ironic-rally-nv job
description: >
This task contains various scenarios for testing ironic plugins
subtasks:
-
title: IronicNodes.create_and_list_node tests
scenario:
IronicNodes.create_and_list_node:
driver: "fake"
properties:
capabilities: "boot_option:local"
runner:
constant:
times: 100
concurrency: 20
contexts:
users:
tenants: 5
users_per_tenant: 1
-
title: IronicNodes.create_and_delete_node tests
scenario:
IronicNodes.create_and_delete_node:
driver: "fake"
properties:
capabilities: "boot_option:local"
runner:
constant:
times: 100
concurrency: 20
contexts:
users:
tenants: 5
users_per_tenant: 1


@@ -1 +0,0 @@
rally.yaml


@@ -1,78 +0,0 @@
{% set image = "Fedora-Atomic-26-20170723.0.x86_64" %}
---
version: 2
title: Task for gate-rally-dsvm-magnum-rally-nv job
description: >
This task contains various subtasks for testing magnum plugins
subtasks:
-
title: MagnumClusterTemplates.list_cluster_templates tests
workloads:
-
scenario:
MagnumClusterTemplates.list_cluster_templates: {}
runner:
constant:
times: 40
concurrency: 20
contexts:
users:
tenants: 1
users_per_tenant: 1
cluster_templates:
image_id: {{ image }}
flavor_id: "m1.small"
master_flavor_id: "m1.small"
external_network_id: "public"
dns_nameserver: "8.8.8.8"
docker_volume_size: 5
coe: "kubernetes"
network_driver: "flannel"
docker_storage_driver: "devicemapper"
master_lb_enabled: False
-
scenario:
MagnumClusterTemplates.list_cluster_templates: {}
runner:
constant:
times: 40
concurrency: 20
contexts:
users:
tenants: 1
users_per_tenant: 1
cluster_templates:
image_id: {{ image }}
flavor_id: "m1.small"
master_flavor_id: "m1.small"
external_network_id: "public"
dns_nameserver: "8.8.8.8"
docker_volume_size: 5
coe: "swarm"
network_driver: "docker"
docker_storage_driver: "devicemapper"
master_lb_enabled: False
-
title: MagnumClusters.create_and_list_clusters tests
scenario:
MagnumClusters.create_and_list_clusters:
node_count: 1
runner:
constant:
times: 1
concurrency: 1
contexts:
users:
tenants: 1
users_per_tenant: 1
cluster_templates:
image_id: {{ image }}
flavor_id: "m1.small"
master_flavor_id: "m1.small"
external_network_id: "public"
dns_nameserver: "8.8.8.8"
docker_volume_size: 5
coe: "swarm"
network_driver: "docker"
docker_storage_driver: "devicemapper"
master_lb_enabled: False


@@ -1,169 +0,0 @@
---
version: 2
title: Task for gate-rally-dsvm-manila-multibackend-no-ss job
description: >
This task contains various subtasks for testing manila plugins
subtasks:
-
title: Test Manila Quotas context
scenario:
Dummy.openstack: {}
runner:
constant:
times: 1
concurrency: 1
contexts:
users:
tenants: 1
users_per_tenant: 1
quotas:
manila:
shares: -1
gigabytes: -1
snapshots: -1
snapshot_gigabytes: -1
share_networks: -1
-
title: ManilaShares.list_shares tests
scenario:
ManilaShares.list_shares:
detailed: True
runner:
constant:
times: 10
concurrency: 1
contexts:
users:
tenants: 1
users_per_tenant: 1
-
title: ManilaShares.create_share_then_allow_and_deny_access tests
scenario:
ManilaShares.create_share_then_allow_and_deny_access:
share_proto: "nfs"
share_type: "dhss_false"
size: 1
access: "127.0.0.1"
access_type: "ip"
runner:
constant:
times: 2
concurrency: 2
contexts:
quotas:
manila:
shares: -1
gigabytes: -1
users:
tenants: 2
users_per_tenant: 1
-
title: ManilaShares.create_and_delete_share tests
scenario:
ManilaShares.create_and_delete_share:
share_proto: "nfs"
size: 1
share_type: "dhss_false"
min_sleep: 1
max_sleep: 2
runner:
constant:
times: 4
concurrency: 4
contexts:
quotas:
manila:
shares: -1
gigabytes: -1
users:
tenants: 2
users_per_tenant: 1
-
title: ManilaShares.create_and_list_share tests
scenario:
ManilaShares.create_and_list_share:
share_proto: "nfs"
size: 1
share_type: "dhss_false"
min_sleep: 1
max_sleep: 2
runner:
constant:
times: 4
concurrency: 4
contexts:
quotas:
manila:
shares: -1
gigabytes: -1
users:
tenants: 2
users_per_tenant: 1
-
title: ManilaShares.create_and_extend_share tests
scenario:
ManilaShares.create_and_extend_share:
share_proto: "nfs"
size: 1
share_type: "dhss_false"
new_size: 2
runner:
constant:
times: 4
concurrency: 4
contexts:
quotas:
manila:
shares: -1
gigabytes: -1
users:
tenants: 2
users_per_tenant: 1
-
title: ManilaShares.create_and_shrink_share tests
scenario:
ManilaShares.create_and_shrink_share:
share_proto: "nfs"
size: 2
share_type: "dhss_false"
new_size: 1
runner:
constant:
times: 4
concurrency: 4
contexts:
quotas:
manila:
shares: -1
gigabytes: -1
users:
tenants: 2
users_per_tenant: 1
-
title: ManilaShares.set_and_delete_metadata tests
scenario:
ManilaShares.set_and_delete_metadata:
sets: 1
set_size: 3
delete_size: 3
key_min_length: 1
key_max_length: 256
value_min_length: 1
value_max_length: 1024
runner:
constant:
times: 10
concurrency: 10
contexts:
quotas:
manila:
shares: -1
gigabytes: -1
users:
tenants: 1
users_per_tenant: 1
manila_shares:
shares_per_tenant: 1
share_proto: "NFS"
size: 1
share_type: "dhss_false"


@@ -1,281 +0,0 @@
---
version: 2
title: Task for gate-rally-dsvm-manila-multibackend-no-ss job
description: >
This task contains various subtasks for testing manila plugins
subtasks:
-
title: Test Manila Quotas context
scenario:
Dummy.openstack: {}
runner:
constant:
times: 1
concurrency: 1
contexts:
users:
tenants: 1
users_per_tenant: 1
quotas:
manila:
shares: -1
gigabytes: -1
snapshots: -1
snapshot_gigabytes: -1
share_networks: -1
-
title: ManilaShares.list_shares tests
scenario:
ManilaShares.list_shares:
detailed: True
runner:
constant:
times: 12
concurrency: 4
contexts:
users:
tenants: 3
users_per_tenant: 4
user_choice_method: "round_robin"
-
title: ManilaShares.create_and_extend_share tests
scenario:
ManilaShares.create_and_extend_share:
share_proto: "nfs"
size: 1
new_size: 2
share_type: "dhss_true"
runner:
constant:
times: 4
concurrency: 4
contexts:
quotas:
manila:
shares: -1
gigabytes: -1
share_networks: -1
users:
tenants: 2
users_per_tenant: 1
user_choice_method: "round_robin"
manila_share_networks:
use_share_networks: True
-
title: ManilaShares.create_and_shrink_share tests
scenario:
ManilaShares.create_and_shrink_share:
share_proto: "nfs"
size: 2
new_size: 1
share_type: "dhss_true"
runner:
constant:
times: 4
concurrency: 4
contexts:
quotas:
manila:
shares: -1
gigabytes: -1
share_networks: -1
users:
tenants: 2
users_per_tenant: 1
user_choice_method: "round_robin"
manila_share_networks:
use_share_networks: True
-
title: ManilaShares.create_share_then_allow_and_deny_access tests
scenario:
ManilaShares.create_share_then_allow_and_deny_access:
share_proto: "nfs"
size: 1
share_type: "dhss_true"
access: "127.0.0.1"
access_type: "ip"
runner:
constant:
times: 4
concurrency: 4
contexts:
quotas:
manila:
shares: -1
gigabytes: -1
share_networks: -1
users:
tenants: 2
users_per_tenant: 1
user_choice_method: "round_robin"
manila_share_networks:
use_share_networks: True
-
title: ManilaShares.create_and_delete_share tests
scenario:
ManilaShares.create_and_delete_share:
share_proto: "nfs"
size: 1
share_type: "dhss_true"
min_sleep: 1
max_sleep: 2
runner:
constant:
times: 4
concurrency: 4
contexts:
quotas:
manila:
shares: -1
gigabytes: -1
share_networks: -1
users:
tenants: 2
users_per_tenant: 1
user_choice_method: "round_robin"
manila_share_networks:
use_share_networks: True
-
title: ManilaShares.create_and_list_share tests
scenario:
ManilaShares.create_and_list_share:
share_proto: "nfs"
size: 1
share_type: "dhss_true"
min_sleep: 1
max_sleep: 2
runner:
constant:
times: 4
concurrency: 4
contexts:
quotas:
manila:
shares: -1
gigabytes: -1
share_networks: -1
users:
tenants: 2
users_per_tenant: 1
user_choice_method: "round_robin"
manila_share_networks:
use_share_networks: True
-
title: ManilaShares.create_share_network_and_delete tests
scenario:
ManilaShares.create_share_network_and_delete:
name: "rally"
runner:
constant:
times: 10
concurrency: 10
contexts:
quotas:
manila:
share_networks: -1
users:
tenants: 2
users_per_tenant: 1
-
title: ManilaShares.create_share_network_and_list tests
scenario:
ManilaShares.create_share_network_and_list:
name: "rally"
detailed: True
search_opts:
name: "rally"
runner:
constant:
times: 10
concurrency: 10
contexts:
quotas:
manila:
share_networks: -1
users:
tenants: 2
users_per_tenant: 1
-
title: ManilaShares.list_share_servers tests
scenario:
ManilaShares.list_share_servers:
search_opts: {}
runner:
constant:
times: 10
concurrency: 10
-
title: ManilaShares.create_security_service_and_delete tests
workloads:
{% for s in ("ldap", "kerberos", "active_directory") %}
-
scenario:
ManilaShares.create_security_service_and_delete:
security_service_type: {{s}}
dns_ip: "fake_dns_ip"
server: "fake-server"
domain: "fake_domain"
user: "fake_user"
password: "fake_password"
name: "fake_name"
description: "fake_description"
runner:
constant:
times: 10
concurrency: 10
contexts:
users:
tenants: 1
users_per_tenant: 1
{% endfor %}
-
title: ManilaShares.attach_security_service_to_share_network tests
workloads:
{% for s in ("ldap", "kerberos", "active_directory") %}
-
scenario:
ManilaShares.attach_security_service_to_share_network:
security_service_type: {{s}}
runner:
constant:
times: 10
concurrency: 10
contexts:
users:
tenants: 1
users_per_tenant: 1
quotas:
manila:
share_networks: -1
{% endfor %}
-
title: ManilaShares.set_and_delete_metadata tests
scenario:
ManilaShares.set_and_delete_metadata:
sets: 1
set_size: 3
delete_size: 3
key_min_length: 1
key_max_length: 256
value_min_length: 1
value_max_length: 1024
runner:
constant:
times: 10
concurrency: 10
contexts:
quotas:
manila:
shares: -1
gigabytes: -1
share_networks: -1
users:
tenants: 1
users_per_tenant: 1
manila_share_networks:
use_share_networks: True
manila_shares:
shares_per_tenant: 1
share_proto: "NFS"
size: 1
share_type: "dhss_true"


@@ -1,97 +0,0 @@
---
version: 2
title: Task for gate-rally-dsvm-mistral-rally-ubuntu-xenial-nv job
description: >
This task contains various subtasks for testing mistral plugins
subtasks:
-
title: MistralWorkbooks.list_workbooks tests
scenario:
MistralWorkbooks.list_workbooks: {}
runner:
constant:
times: 50
concurrency: 10
contexts:
users:
tenants: 1
users_per_tenant: 1
-
title: MistralWorkbooks.create_workbook tests
workloads:
-
scenario:
MistralWorkbooks.create_workbook:
definition: "~/.rally/extra/mistral_wb.yaml"
runner:
constant:
times: 50
concurrency: 10
contexts:
users:
tenants: 1
users_per_tenant: 1
-
scenario:
MistralWorkbooks.create_workbook:
definition: "~/.rally/extra/mistral_wb.yaml"
do_delete: true
runner:
constant:
times: 50
concurrency: 10
contexts:
users:
tenants: 1
users_per_tenant: 1
-
title: MistralExecutions.list_executions tests
scenario:
MistralExecutions.list_executions: {}
runner:
constant:
times: 50
concurrency: 10
contexts:
users:
tenants: 2
users_per_tenant: 2
-
title: MistralExecutions.create_execution_from_workbook tests
workloads:
-
description: MistralExecutions.create_execution_from_workbook scenario
with delete option
scenario:
MistralExecutions.create_execution_from_workbook:
definition: "~/.rally/extra/mistral_wb.yaml"
workflow_name: "wf1"
params: "~/.rally/extra/mistral_params.json"
wf_input: "~/.rally/extra/mistral_input.json"
do_delete: true
runner:
constant:
times: 50
concurrency: 10
contexts:
users:
tenants: 2
users_per_tenant: 2
-
description: MistralExecutions.create_execution_from_workbook scenario
without delete option
scenario:
MistralExecutions.create_execution_from_workbook:
definition: "~/.rally/extra/mistral_wb.yaml"
workflow_name: "wf1"
params: "~/.rally/extra/mistral_params.json"
wf_input: "~/.rally/extra/mistral_input.json"
do_delete: false
runner:
constant:
times: 50
concurrency: 10
contexts:
users:
tenants: 2
users_per_tenant: 2


@@ -1,44 +0,0 @@
---
version: 2
title: Task for gate-rally-dsvm-monasca-rally-ubuntu-xenial-nv job
description: >
This task contains various subtasks for testing Monasca plugins
subtasks:
-
title: MonascaMetrics.list_metrics tests
workloads:
-
scenario:
MonascaMetrics.list_metrics: {}
runner:
constant:
times: 10
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
roles:
- "monasca-user"
monasca_metrics:
"dimensions":
"region": "RegionOne"
"service": "identity"
"hostname": "fake_host"
"url": "http://fake_host:5000/v2.0"
"metrics_per_tenant": 10
-
scenario:
MonascaMetrics.list_metrics: {}
runner:
constant:
times: 10
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
roles:
- "monasca-user"
monasca_metrics:
"metrics_per_tenant": 10


@@ -1,859 +0,0 @@
---
{%- set cirros_image_url = "http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img" %}
{%- set keystone_version = keystone_version|default("v2.0") %}
{% if keystone_version == "v2.0" %}
SaharaNodeGroupTemplates.create_and_list_node_group_templates:
-
args:
hadoop_version: "{{sahara_hadoop_version}}"
flavor:
name: "m1.small"
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
api_versions:
sahara:
service_type: {{sahara_service_type}}
sla:
failure_rate:
max: 0
SaharaNodeGroupTemplates.create_delete_node_group_templates:
-
args:
hadoop_version: "{{sahara_hadoop_version}}"
flavor:
name: "m1.small"
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
api_versions:
sahara:
service_type: {{sahara_service_type}}
sla:
failure_rate:
max: 0
KeystoneBasic.create_and_list_tenants:
-
args: {}
runner:
type: "constant"
times: 1
concurrency: 1
sla:
failure_rate:
max: 0
KeystoneBasic.create_tenant:
-
args: {}
runner:
type: "constant"
times: 1
concurrency: 1
sla:
failure_rate:
max: 0
KeystoneBasic.create_tenant_with_users:
-
args:
users_per_tenant: 10
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
sla:
failure_rate:
max: 0
{% endif %}
KeystoneBasic.create_user:
-
args: {}
runner:
type: "constant"
times: 1
concurrency: 1
sla:
failure_rate:
max: 0
KeystoneBasic.create_delete_user:
-
args: {}
runner:
type: "constant"
times: 1
concurrency: 1
sla:
failure_rate:
max: 0
KeystoneBasic.create_and_list_users:
-
args: {}
runner:
type: "constant"
times: 1
concurrency: 1
sla:
failure_rate:
max: 0
HeatStacks.create_and_list_stack:
-
args:
template_path: "~/.rally/extra/default.yaml.template"
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
HeatStacks.create_and_delete_stack:
-
args:
template_path: "~/.rally/extra/default.yaml.template"
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
Authenticate.keystone:
-
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
Authenticate.validate_cinder:
-
args:
repetitions: 2
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
Authenticate.validate_glance:
-
args:
repetitions: 2
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
Authenticate.validate_heat:
-
args:
repetitions: 2
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
Authenticate.validate_nova:
-
args:
repetitions: 2
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
Quotas.cinder_update_and_delete:
-
args:
max_quota: 1024
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
Quotas.cinder_update:
-
args:
max_quota: 1024
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
Quotas.nova_update_and_delete:
-
args:
max_quota: 1024
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
Quotas.nova_update:
-
args:
max_quota: 1024
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
VMTasks.boot_runcommand_delete:
-
args:
flavor:
name: "m1.tiny"
image:
name: "TestVM|cirros.*uec"
floating_network: "{{external_net}}"
use_floating_ip: true
command:
script_file: "~/.rally/extra/instance_test.sh"
interpreter: "/bin/sh"
username: "cirros"
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
network: {}
sla:
failure_rate:
max: 0
NovaServers.boot_and_delete_server:
-
args:
flavor:
name: "m1.tiny"
image:
name: "TestVM|cirros.*uec"
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
-
args:
auto_assign_nic: true
flavor:
name: "m1.tiny"
image:
name: "TestVM|cirros.*uec"
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
network:
start_cidr: "10.2.0.0/24"
networks_per_tenant: 2
sla:
failure_rate:
max: 0
NovaServers.boot_and_list_server:
-
args:
flavor:
name: "m1.tiny"
image:
name: "TestVM|cirros.*uec"
detailed: True
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
NovaServers.list_servers:
-
args:
detailed: True
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
servers:
flavor:
name: "m1.tiny"
image:
name: "TestVM|cirros.*uec"
servers_per_tenant: 1
sla:
failure_rate:
max: 0
NovaServers.boot_and_bounce_server:
-
args:
flavor:
name: "m1.tiny"
image:
name: "TestVM|cirros.*uec"
actions:
-
hard_reboot: 1
-
stop_start: 1
-
rescue_unrescue: 1
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
NovaServers.boot_server:
-
args:
flavor:
name: "^ram64$"
image:
name: "TestVM|cirros.*uec"
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
flavors:
-
name: "ram64"
ram: 64
sla:
failure_rate:
max: 0
-
args:
flavor:
name: "m1.tiny"
image:
name: "TestVM|cirros.*uec"
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
NeutronNetworks.create_and_list_networks:
-
args:
network_create_args:
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
quotas:
neutron:
network: -1
sla:
failure_rate:
max: 0
NeutronNetworks.create_and_list_subnets:
-
args:
network_create_args:
subnet_create_args:
subnet_cidr_start: "1.1.0.0/30"
subnets_per_network: 2
runner:
type: "constant"
times: 1
concurrency: 1
context:
network: {}
users:
tenants: 1
users_per_tenant: 1
quotas:
neutron:
network: -1
subnet: -1
sla:
failure_rate:
max: 0
NeutronNetworks.create_and_list_routers:
-
args:
network_create_args:
subnet_create_args:
subnet_cidr_start: "1.1.0.0/30"
subnets_per_network: 2
router_create_args:
runner:
type: "constant"
times: 1
concurrency: 1
context:
network: {}
users:
tenants: 1
users_per_tenant: 1
quotas:
neutron:
network: -1
subnet: -1
router: -1
sla:
failure_rate:
max: 0
NeutronNetworks.create_and_list_ports:
-
args:
network_create_args:
port_create_args:
ports_per_network: 4
runner:
type: "constant"
times: 1
concurrency: 1
context:
network: {}
users:
tenants: 1
users_per_tenant: 1
quotas:
neutron:
network: -1
subnet: -1
router: -1
port: -1
sla:
failure_rate:
max: 0
NeutronNetworks.create_and_update_networks:
-
args:
network_create_args: {}
network_update_args:
admin_state_up: False
name: "_updated"
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
quotas:
neutron:
network: -1
sla:
failure_rate:
max: 0
NeutronNetworks.create_and_update_subnets:
-
args:
network_create_args: {}
subnet_create_args: {}
subnet_cidr_start: "1.4.0.0/16"
subnets_per_network: 2
subnet_update_args:
enable_dhcp: False
name: "_subnet_updated"
runner:
type: "constant"
times: 1
concurrency: 1
context:
network: {}
users:
tenants: 5
users_per_tenant: 5
quotas:
neutron:
network: -1
subnet: -1
sla:
failure_rate:
max: 0
NeutronNetworks.create_and_update_routers:
-
args:
network_create_args: {}
subnet_create_args: {}
subnet_cidr_start: "1.1.0.0/30"
subnets_per_network: 2
router_create_args: {}
router_update_args:
admin_state_up: False
name: "_router_updated"
runner:
type: "constant"
times: 1
concurrency: 1
context:
network: {}
users:
tenants: 1
users_per_tenant: 1
quotas:
neutron:
network: -1
subnet: -1
router: -1
sla:
failure_rate:
max: 0
NeutronNetworks.create_and_update_ports:
-
args:
network_create_args: {}
port_create_args: {}
ports_per_network: 5
port_update_args:
admin_state_up: False
device_id: "dummy_id"
device_owner: "dummy_owner"
name: "_port_updated"
runner:
type: "constant"
times: 1
concurrency: 1
context:
network: {}
users:
tenants: 1
users_per_tenant: 1
quotas:
neutron:
network: -1
port: -1
sla:
failure_rate:
max: 0
NeutronNetworks.create_and_delete_networks:
-
args:
network_create_args: {}
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
quotas:
neutron:
network: -1
subnet: -1
sla:
failure_rate:
max: 0
NeutronNetworks.create_and_delete_subnets:
-
args:
network_create_args: {}
subnet_create_args: {}
subnet_cidr_start: "1.1.0.0/30"
subnets_per_network: 2
runner:
type: "constant"
times: 1
concurrency: 1
context:
network: {}
users:
tenants: 1
users_per_tenant: 1
quotas:
neutron:
network: -1
subnet: -1
sla:
failure_rate:
max: 0
NeutronNetworks.create_and_delete_ports:
-
args:
network_create_args: {}
port_create_args: {}
ports_per_network: 10
runner:
type: "constant"
times: 1
concurrency: 1
context:
network: {}
users:
tenants: 1
users_per_tenant: 1
quotas:
neutron:
network: -1
port: -1
sla:
failure_rate:
max: 0
CinderVolumes.create_and_upload_volume_to_image:
-
args:
size: 1
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
CinderVolumes.create_volume_backup:
-
args:
size: 1
do_delete: True
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
CinderVolumes.create_and_restore_volume_backup:
-
args:
size: 1
do_delete: True
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
CinderVolumes.create_and_list_volume_backups:
-
args:
size: 1
detailed: True
do_delete: True
runner:
type: "constant"
times: 2
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
VMTasks.runcommand_heat:
-
args:
workload:
resource: ["rally.plugins.workload", "siege.py"]
username: "fedora"
template: /home/rally/.rally/extra/workload/wordpress_heat_template.yaml
files:
wp-instances.yaml: /home/rally/.rally/extra/workload/wp-instances.yaml
parameters:
wp_instances_count: 2
wp_instance_type: gig
instance_type: gig
wp_image: fedora
image: fedora
network_id: {{external_net_id}}
context:
users:
tenants: 1
users_per_tenant: 1
flavors:
- name: gig
ram: 1024
disk: 4
vcpus: 1
runner:
concurrency: 1
timeout: 3000
times: 1
type: constant
sla:
failure_rate:
max: 100
GlanceImages.create_and_update_image:
-
args:
image_location: "{{ cirros_image_url }}"
container_format: "bare"
disk_format: "qcow2"
runner:
type: "constant"
times: 4
concurrency: 2
context:
users:
tenants: 2
users_per_tenant: 2
api_versions:
glance:
version: 1
sla:
failure_rate:
max: 100


@@ -1,146 +0,0 @@
---
MuranoEnvironments.list_environments:
-
runner:
type: "constant"
times: 30
concurrency: 4
context:
users:
tenants: 2
users_per_tenant: 2
murano_environments:
environments_per_tenant: 2
sla:
failure_rate:
max: 0
MuranoEnvironments.create_and_delete_environment:
-
runner:
type: "constant"
times: 20
concurrency: 2
context:
users:
tenants: 2
users_per_tenant: 2
sla:
failure_rate:
max: 0
MuranoEnvironments.create_and_deploy_environment:
-
args:
packages_per_env: 2
runner:
type: "constant"
times: 8
concurrency: 2
context:
users:
tenants: 2
users_per_tenant: 2
murano_packages:
app_package: "~/.rally/extra/murano/applications/HelloReporter/io.murano.apps.HelloReporter.zip"
roles:
- "admin"
sla:
failure_rate:
max: 0
-
args:
packages_per_env: 2
runner:
type: "constant"
times: 8
concurrency: 2
context:
users:
tenants: 2
users_per_tenant: 2
murano_packages:
app_package: "~/.rally/extra/murano/applications/HelloReporter/io.murano.apps.HelloReporter/"
roles:
- "admin"
MuranoPackages.import_and_list_packages:
-
args:
package: "~/.rally/extra/murano/applications/HelloReporter/io.murano.apps.HelloReporter/"
runner:
type: "constant"
times: 10
concurrency: 2
context:
users:
tenants: 2
users_per_tenant: 2
sla:
failure_rate:
max: 0
-
args:
package: "~/.rally/extra/murano/applications/HelloReporter/io.murano.apps.HelloReporter.zip"
runner:
type: "constant"
times: 1
concurrency: 1
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
MuranoPackages.import_and_delete_package:
-
args:
package: "~/.rally/extra/murano/applications/HelloReporter/io.murano.apps.HelloReporter/"
runner:
type: "constant"
times: 10
concurrency: 2
context:
users:
tenants: 2
users_per_tenant: 2
sla:
failure_rate:
max: 0
MuranoPackages.import_and_filter_applications:
-
args:
package: "~/.rally/extra/murano/applications/HelloReporter/io.murano.apps.HelloReporter/"
filter_query: {"category" : "Web"}
runner:
type: "constant"
times: 10
concurrency: 2
context:
users:
tenants: 2
users_per_tenant: 2
sla:
failure_rate:
max: 0
MuranoPackages.package_lifecycle:
-
args:
package: "~/.rally/extra/murano/applications/HelloReporter/io.murano.apps.HelloReporter/"
body: {"categories": ["Web"]}
operation: "add"
runner:
type: "constant"
times: 10
concurrency: 2
context:
users:
tenants: 2
users_per_tenant: 2
sla:
failure_rate:
max: 0


@@ -1,148 +0,0 @@
{% set flavor_name = flavor_name or "m1.tiny" %}
---
NeutronLoadbalancerV2.create_and_list_loadbalancers:
-
args:
lb_create_args: {}
runner:
type: "constant"
times: 2
concurrency: 2
context:
users:
tenants: 2
users_per_tenant: 1
network: {}
sla:
failure_rate:
max: 0
NeutronBGPVPN.create_and_list_bgpvpns:
-
runner:
type: "constant"
times: 8
concurrency: 4
context:
users:
tenants: 2
users_per_tenant: 1
sla:
failure_rate:
max: 0
NeutronBGPVPN.create_and_update_bgpvpns:
-
runner:
type: "constant"
times: 8
concurrency: 4
context:
users:
tenants: 2
users_per_tenant: 1
sla:
failure_rate:
max: 0
NeutronBGPVPN.create_and_delete_bgpvpns:
-
runner:
type: "constant"
times: 8
concurrency: 4
context:
users:
tenants: 2
users_per_tenant: 1
sla:
failure_rate:
max: 0
NeutronBGPVPN.create_bgpvpn_assoc_disassoc_networks:
-
runner:
type: "constant"
times: 8
concurrency: 4
context:
users:
tenants: 2
users_per_tenant: 1
network: {}
servers:
servers_per_tenant: 1
auto_assign_nic: True
flavor:
name: "{{flavor_name}}"
image:
name: "^cirros.*-disk$"
sla:
failure_rate:
max: 0
NeutronBGPVPN.create_bgpvpn_assoc_disassoc_routers:
-
runner:
type: "constant"
times: 8
concurrency: 4
context:
users:
tenants: 2
users_per_tenant: 1
network: {}
servers:
servers_per_tenant: 1
auto_assign_nic: True
flavor:
name: "{{flavor_name}}"
image:
name: "^cirros.*-disk$"
sla:
failure_rate:
max: 0
NeutronBGPVPN.create_and_list_networks_associations:
-
runner:
type: "constant"
times: 8
concurrency: 4
context:
users:
tenants: 2
users_per_tenant: 1
network: {}
servers:
servers_per_tenant: 1
auto_assign_nic: True
flavor:
name: "{{flavor_name}}"
image:
name: "^cirros.*-disk$"
sla:
failure_rate:
max: 0
NeutronBGPVPN.create_and_list_routers_associations:
-
runner:
type: "constant"
times: 8
concurrency: 4
context:
users:
tenants: 2
users_per_tenant: 1
network: {}
servers:
servers_per_tenant: 1
auto_assign_nic: True
flavor:
name: "{{flavor_name}}"
image:
name: "^cirros.*-disk$"
sla:
failure_rate:
max: 0


@@ -1,30 +0,0 @@
---
version: 2
title: Task for gate-rally-dsvm-senlin-rally-ubuntu-xenial-nv job
description: >
This task contains various scenarios for testing Senlin plugins
subtasks:
-
title: SenlinClusters.create_and_delete_cluster tests
scenario:
SenlinClusters.create_and_delete_cluster:
desired_capacity: 3
min_size: 0
max_size: 5
runner:
constant:
times: 3
concurrency: 2
contexts:
users:
tenants: 2
users_per_tenant: 2
profiles:
type: os.nova.server
version: "1.0"
properties:
name: cirros_server
flavor: 1
image: "cirros-0.3.5-x86_64-disk"
networks:
- network: private

@ -1,24 +0,0 @@
---
version: 2
title: Task for gate-rally-dsvm-zaqar-rally-ubuntu-xenial-nv job
description: >
This task contains various scenarios for testing Zaqar plugins
subtasks:
-
title: ZaqarBasic.create_queue test
scenario:
ZaqarBasic.create_queue: {}
runner:
constant:
times: 100
concurrency: 10
-
title: ZaqarBasic.producer_consumer test
scenario:
ZaqarBasic.producer_consumer:
min_msg_count: 50
max_msg_count: 200
runner:
constant:
times: 100
concurrency: 10

@ -1,429 +0,0 @@
{% set image_name = "^cirros.*-disk$" %}
{% set flavor_name = "m1.nano" %}
{% set smoke = 0 %}
---
CeilometerEvents.create_user_and_get_event:
-
runner:
type: "constant"
times: 10
concurrency: 10
context:
users:
tenants: 2
users_per_tenant: 2
sla:
failure_rate:
max: 0
CeilometerEvents.create_user_and_list_event_types:
-
runner:
type: "constant"
times: 10
concurrency: 10
context:
users:
tenants: 2
users_per_tenant: 2
sla:
failure_rate:
max: 0
CeilometerEvents.create_user_and_list_events:
-
runner:
type: "constant"
times: 10
concurrency: 10
context:
users:
tenants: 2
users_per_tenant: 2
sla:
failure_rate:
max: 0
CeilometerTraits.create_user_and_list_trait_descriptions:
-
runner:
type: "constant"
times: 10
concurrency: 10
context:
users:
tenants: 2
users_per_tenant: 2
sla:
failure_rate:
max: 0
CeilometerTraits.create_user_and_list_traits:
-
runner:
type: "constant"
times: 10
concurrency: 10
context:
users:
tenants: 2
users_per_tenant: 2
sla:
failure_rate:
max: 0
CeilometerMeters.list_meters:
-
runner:
type: constant
times: 10
concurrency: 2
context:
users:
tenants: 1
users_per_tenant: 1
ceilometer:
counter_name: "rally_meter"
counter_type: "gauge"
counter_unit: "%"
counter_volume: 100
resources_per_tenant: 1
samples_per_resource: 1
timestamp_interval: 1
sla:
failure_rate:
max: 0
CeilometerResource.list_resources:
-
runner:
type: constant
times: 10
concurrency: 2
context:
users:
tenants: 1
users_per_tenant: 1
ceilometer:
counter_name: "rally_meter"
counter_type: "gauge"
counter_unit: "%"
counter_volume: 100
resources_per_tenant: 1
samples_per_resource: 1
timestamp_interval: 1
sla:
failure_rate:
max: 0
CeilometerSamples.list_samples:
-
runner:
type: "constant"
times: 10
concurrency: 10
context:
users:
tenants: 1
users_per_tenant: 1
ceilometer:
counter_name: "cpu_util"
counter_type: "gauge"
counter_unit: "instance"
counter_volume: 1.0
resources_per_tenant: 3
samples_per_resource: 10
timestamp_interval: 60
metadata_list:
- status: "active"
name: "fake_resource"
deleted: "False"
created_at: "2015-09-04T12:34:19.000000"
- status: "not_active"
name: "fake_resource_1"
deleted: "False"
created_at: "2015-09-10T06:55:12.000000"
batch_size: 5
sla:
failure_rate:
max: 0
CeilometerResource.get_tenant_resources:
-
runner:
type: "constant"
times: 10
concurrency: 5
context:
users:
tenants: 2
users_per_tenant: 2
ceilometer:
counter_name: "cpu_util"
counter_type: "gauge"
counter_volume: 1.0
counter_unit: "instance"
resources_per_tenant: 3
sla:
failure_rate:
max: 0
CeilometerAlarms.create_alarm:
-
args:
meter_name: "ram_util"
threshold: 10.0
type: "threshold"
statistic: "avg"
alarm_actions: ["http://localhost:8776/alarm"]
ok_actions: ["http://localhost:8776/ok"]
insufficient_data_actions: ["http://localhost:8776/notok"]
runner:
type: "constant"
times: 10
concurrency: 10
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
CeilometerAlarms.create_and_delete_alarm:
-
args:
meter_name: "ram_util"
threshold: 10.0
type: "threshold"
statistic: "avg"
alarm_actions: ["http://localhost:8776/alarm"]
ok_actions: ["http://localhost:8776/ok"]
insufficient_data_actions: ["http://localhost:8776/notok"]
runner:
type: "constant"
times: 10
concurrency: 10
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
CeilometerAlarms.create_and_list_alarm:
-
args:
meter_name: "ram_util"
threshold: 10.0
type: "threshold"
statistic: "avg"
alarm_actions: ["http://localhost:8776/alarm"]
ok_actions: ["http://localhost:8776/ok"]
insufficient_data_actions: ["http://localhost:8776/notok"]
runner:
type: "constant"
times: 10
concurrency: 10
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
CeilometerAlarms.create_and_get_alarm:
-
args:
meter_name: "ram_util"
threshold: 10.0
type: "threshold"
statistic: "avg"
alarm_actions: ["http://localhost:8776/alarm"]
ok_actions: ["http://localhost:8776/ok"]
insufficient_data_actions: ["http://localhost:8776/notok"]
runner:
type: "constant"
times: 10
concurrency: 2
context:
users:
tenants: 2
users_per_tenant: 2
sla:
failure_rate:
max: 0
CeilometerAlarms.create_and_update_alarm:
-
args:
meter_name: "ram_util"
threshold: 10.0
type: "threshold"
statistic: "avg"
alarm_actions: ["http://localhost:8776/alarm"]
ok_actions: ["http://localhost:8776/ok"]
insufficient_data_actions: ["http://localhost:8776/notok"]
runner:
type: "constant"
times: 10
concurrency: 10
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
CeilometerAlarms.create_alarm_and_get_history:
-
args:
meter_name: "ram_util"
threshold: 10.0
type: "threshold"
state: "ok"
statistic: "avg"
alarm_actions: ["http://localhost:8776/alarm"]
ok_actions: ["http://localhost:8776/ok"]
insufficient_data_actions: ["http://localhost:8776/notok"]
runner:
type: "constant"
times: 10
concurrency: 5
context:
users:
tenants: 2
users_per_tenant: 2
sla:
failure_rate:
max: 0
CeilometerAlarms.list_alarms:
-
runner:
type: "constant"
times: 10
concurrency: 10
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
CeilometerQueries.create_and_query_alarms:
-
args:
filter: {"and": [{"!=": {"state": "dummy_state"}},{"=": {"type": "threshold"}}]}
orderby: !!null
limit: 10
meter_name: "ram_util"
threshold: 10.0
type: "threshold"
statistic: "avg"
alarm_actions: ["http://localhost:8776/alarm"]
ok_actions: ["http://localhost:8776/ok"]
insufficient_data_actions: ["http://localhost:8776/notok"]
runner:
type: "constant"
times: 20
concurrency: 10
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
CeilometerQueries.create_and_query_alarm_history:
-
args:
orderby: !!null
limit: !!null
meter_name: "ram_util"
threshold: 10.0
type: "threshold"
statistic: "avg"
alarm_actions: ["http://localhost:8776/alarm"]
ok_actions: ["http://localhost:8776/ok"]
insufficient_data_actions: ["http://localhost:8776/notok"]
runner:
type: "constant"
times: 20
concurrency: 10
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0
CeilometerStats.get_stats:
-
runner:
type: constant
times: 10
concurrency: 2
context:
users:
tenants: 2
users_per_tenant: 2
ceilometer:
counter_name: "rally_meter"
counter_type: "gauge"
counter_unit: "%"
counter_volume: 100
resources_per_tenant: 5
samples_per_resource: 5
timestamp_interval: 10
metadata_list:
-
status: "active"
name: "rally on"
deleted: "false"
-
status: "terminated"
name: "rally off"
deleted: "true"
args:
meter_name: "rally_meter"
filter_by_user_id: true
filter_by_project_id: true
filter_by_resource_id: true
metadata_query:
status: "terminated"
period: 300
groupby: "resource_id"
sla:
failure_rate:
max: 0
CeilometerQueries.create_and_query_samples:
-
args:
filter: {"=": {"counter_unit": "instance"}}
orderby: !!null
limit: 10
counter_name: "cpu_util"
counter_type: "gauge"
counter_unit: "instance"
counter_volume: "1.0"
resource_id: "resource_id"
runner:
type: "constant"
times: 20
concurrency: 10
context:
users:
tenants: 1
users_per_tenant: 1
sla:
failure_rate:
max: 0

@ -137,16 +137,6 @@ class _Deployment(APIGroup):
     def get(self, deployment):
         return self._get(deployment).to_dict()

-    def service_list(self, deployment):
-        """Get the services list.
-
-        :param deployment: Deployment object
-        :returns: Service list
-        """
-        # TODO(astudenov): make this method platform independent
-        admin = deployment.get_credentials_for("openstack")["admin"]
-        return admin.list_services()
-
     def list(self, status=None, parent_uuid=None, name=None):
         """Get the deployments list.

@ -14,7 +14,6 @@
 # under the License.

 """Contains the Rally objects."""

-from rally.common.objects.credential import Credential  # noqa
 from rally.common.objects.deploy import Deployment  # noqa
 from rally.common.objects.task import Subtask  # noqa
 from rally.common.objects.task import Task  # noqa

@ -1,35 +0,0 @@
# Copyright 2014: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import logging
from rally.plugins.openstack import credential
LOG = logging.getLogger(__name__)
class Credential(credential.OpenStackCredential):
"""Deprecated version of OpenStackCredential class"""
def __init__(self, *args, **kwargs):
super(Credential, self).__init__(*args, **kwargs)
LOG.warning("Class rally.common.objects.Credential is deprecated "
"since Rally 0.11.0. Use raw dict for OpenStack "
"credentials instead.")
def to_dict(self, include_permission=False):
dct = super(Credential, self).to_dict()
if not include_permission:
dct.pop("permission")
return dct
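
A hedged sketch of how this deprecated wrapper behaved; the constructor arguments are illustrative, since the exact OpenStackCredential signature lives in the new rally-openstack package:

# Hypothetical values; Credential is the class defined above.
cred = Credential("http://keystone:5000/v3", "admin", "secret")
# Instantiation logs the "deprecated since Rally 0.11.0" warning.
cred.to_dict()                         # "permission" key is popped
cred.to_dict(include_permission=True)  # "permission" key is kept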

@ -94,12 +94,6 @@ class Deployment(object):
         return self._env

     def __getitem__(self, key):
-        # TODO(astudenov): remove this in future releases
-        if key == "admin" or key == "users":
-            LOG.warning("deployment.%s is deprecated in Rally 0.9.0. "
-                        "Use deployment.get_credentials_for('openstack')"
-                        "['%s'] to get credentials." % (key, key))
-            return self.get_credentials_for("openstack")[key]
         if key == "status":
             status = self._env.status
             return _STATUS_NEW_TO_OLD.get(status, status)
@ -160,12 +154,16 @@ class Deployment(object):
         all_credentials = {}
         for platform, credentials in self._all_credentials.items():
             if platform == "openstack":
-                from rally.plugins.openstack import credential
-                admin = credentials[0]["admin"]
-                if admin:
-                    admin = credential.OpenStackCredential(
-                        permission=consts.EndpointPermission.ADMIN, **admin)
-                all_credentials[platform] = [{
-                    "admin": admin,
-                    "users": [credential.OpenStackCredential(**user)
+                try:
+                    from rally_openstack import credential
+                except ImportError:
+                    all_credentials[platform] = credentials
+                else:
+                    admin = credentials[0]["admin"]
+                    if admin:
+                        admin = credential.OpenStackCredential(
+                            permission=consts.EndpointPermission.ADMIN,
+                            **admin)
+                    all_credentials[platform] = [{
+                        "admin": admin,
+                        "users": [credential.OpenStackCredential(**user)

@ -16,7 +16,6 @@ import importlib
 from rally.common import cfg
 from rally.common import logging
-from rally.plugins.openstack.cfg import opts as openstack_opts
 from rally.task import engine

 CONF = cfg.CONF
@ -25,10 +24,6 @@ CONF = cfg.CONF
 def list_opts():
     merged_opts = {"DEFAULT": []}
-    for category, options in openstack_opts.list_opts().items():
-        merged_opts.setdefault(category, [])
-        merged_opts[category].extend(options)
     merged_opts["DEFAULT"].extend(logging.DEBUG_OPTS)
     merged_opts["DEFAULT"].extend(engine.TASK_ENGINE_OPTS)

@ -1,9 +0,0 @@
from rally.common import logging
LOG = logging.getLogger(__name__)
LOG.warning("rally.osclients module moved to rally.plugins.openstack.osclients"
"rally.osclients module is going to be removed.")
from rally.plugins.openstack.osclients import * # noqa

@ -22,7 +22,7 @@ from rally.task import types
@plugin.configure(name="path_or_url") @plugin.configure(name="path_or_url")
class PathOrUrl(types.ResourceType, types.DeprecatedBehaviourMixin): class PathOrUrl(types.ResourceType):
"""Check whether file exists or url available.""" """Check whether file exists or url available."""
def pre_process(self, resource_spec, config): def pre_process(self, resource_spec, config):
@ -41,7 +41,7 @@ class PathOrUrl(types.ResourceType, types.DeprecatedBehaviourMixin):
@plugin.configure(name="file") @plugin.configure(name="file")
class FileType(types.ResourceType, types.DeprecatedBehaviourMixin): class FileType(types.ResourceType):
"""Return content of the file by its path.""" """Return content of the file by its path."""
def pre_process(self, resource_spec, config): def pre_process(self, resource_spec, config):
@ -50,7 +50,7 @@ class FileType(types.ResourceType, types.DeprecatedBehaviourMixin):
@plugin.configure(name="expand_user_path") @plugin.configure(name="expand_user_path")
class ExpandUserPath(types.ResourceType, types.DeprecatedBehaviourMixin): class ExpandUserPath(types.ResourceType):
"""Expands user path.""" """Expands user path."""
def pre_process(self, resource_spec, config): def pre_process(self, resource_spec, config):
@ -58,7 +58,7 @@ class ExpandUserPath(types.ResourceType, types.DeprecatedBehaviourMixin):
@plugin.configure(name="file_dict") @plugin.configure(name="file_dict")
class FileTypeDict(types.ResourceType, types.DeprecatedBehaviourMixin): class FileTypeDict(types.ResourceType):
"""Return the dictionary of items with file path and file content.""" """Return the dictionary of items with file path and file content."""
def pre_process(self, resource_spec, config): def pre_process(self, resource_spec, config):
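
For reference, a resource type plugin after this change is just a ResourceType subclass with a pre_process() hook; a minimal sketch using only the API visible in this hunk (the plugin name and behaviour are made up for illustration, and the plugin import is assumed to match the one this module already uses):

from rally.common.plugin import plugin
from rally.task import types


@plugin.configure(name="uppercased")
class Uppercased(types.ResourceType):
    """Illustrative plugin: upper-case a string resource spec."""

    def pre_process(self, resource_spec, config):
        # resource_spec is the raw value from the task file;
        # config carries the surrounding scenario configuration.
        return str(resource_spec).upper()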

@ -1,51 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.FloatOpt("cinder_volume_create_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after creating a resource before"
" polling for it status"),
cfg.FloatOpt("cinder_volume_create_timeout",
default=600.0,
deprecated_group="benchmark",
help="Time to wait for cinder volume to be created."),
cfg.FloatOpt("cinder_volume_create_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Interval between checks when waiting for volume"
" creation."),
cfg.FloatOpt("cinder_volume_delete_timeout",
default=600.0,
deprecated_group="benchmark",
help="Time to wait for cinder volume to be deleted."),
cfg.FloatOpt("cinder_volume_delete_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Interval between checks when waiting for volume"
" deletion."),
cfg.FloatOpt("cinder_backup_restore_timeout",
default=600.0,
deprecated_group="benchmark",
help="Time to wait for cinder backup to be restored."),
cfg.FloatOpt("cinder_backup_restore_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Interval between checks when waiting for backup"
" restoring."),
]}

@ -1,27 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.IntOpt("resource_deletion_timeout",
default=600,
deprecated_group="cleanup",
help="A timeout in seconds for deleting resources"),
cfg.IntOpt("cleanup_threads",
default=20,
deprecated_group="cleanup",
help="Number of cleanup threads to run")
]}
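
Once these options are registered under the "openstack" group, consumers read them back as plain attributes; a short sketch of the reading side, matching the usage that appears further down in this commit:

from rally.common import cfg

CONF = cfg.CONF
timeout = CONF.openstack.resource_deletion_timeout  # 600 by default
workers = CONF.openstack.cleanup_threads            # 20 by default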

@ -1,37 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.FloatOpt(
"ec2_server_boot_prepoll_delay",
default=1.0,
deprecated_group="benchmark",
help="Time to sleep after boot before polling for status"
),
cfg.FloatOpt(
"ec2_server_boot_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server boot timeout"
),
cfg.FloatOpt(
"ec2_server_boot_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Server boot poll interval"
)
]}

@ -1,52 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.FloatOpt("glance_image_delete_timeout",
default=120.0,
deprecated_group="benchmark",
help="Time to wait for glance image to be deleted."),
cfg.FloatOpt("glance_image_delete_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Interval between checks when waiting for image "
"deletion."),
cfg.FloatOpt("glance_image_create_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after creating a resource before "
"polling for it status"),
cfg.FloatOpt("glance_image_create_timeout",
default=120.0,
deprecated_group="benchmark",
help="Time to wait for glance image to be created."),
cfg.FloatOpt("glance_image_create_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Interval between checks when waiting for image "
"creation."),
cfg.FloatOpt("glance_image_create_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after creating a resource before "
"polling for it status"),
cfg.FloatOpt("glance_image_create_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Interval between checks when waiting for image "
"creation.")
]}

@ -1,112 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.FloatOpt("heat_stack_create_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time(in sec) to sleep after creating a resource before "
"polling for it status."),
cfg.FloatOpt("heat_stack_create_timeout",
default=3600.0,
deprecated_group="benchmark",
help="Time(in sec) to wait for heat stack to be created."),
cfg.FloatOpt("heat_stack_create_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Time interval(in sec) between checks when waiting for "
"stack creation."),
cfg.FloatOpt("heat_stack_delete_timeout",
default=3600.0,
deprecated_group="benchmark",
help="Time(in sec) to wait for heat stack to be deleted."),
cfg.FloatOpt("heat_stack_delete_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Time interval(in sec) between checks when waiting for "
"stack deletion."),
cfg.FloatOpt("heat_stack_check_timeout",
default=3600.0,
deprecated_group="benchmark",
help="Time(in sec) to wait for stack to be checked."),
cfg.FloatOpt("heat_stack_check_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Time interval(in sec) between checks when waiting for "
"stack checking."),
cfg.FloatOpt("heat_stack_update_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time(in sec) to sleep after updating a resource before "
"polling for it status."),
cfg.FloatOpt("heat_stack_update_timeout",
default=3600.0,
deprecated_group="benchmark",
help="Time(in sec) to wait for stack to be updated."),
cfg.FloatOpt("heat_stack_update_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Time interval(in sec) between checks when waiting for "
"stack update."),
cfg.FloatOpt("heat_stack_suspend_timeout",
default=3600.0,
deprecated_group="benchmark",
help="Time(in sec) to wait for stack to be suspended."),
cfg.FloatOpt("heat_stack_suspend_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Time interval(in sec) between checks when waiting for "
"stack suspend."),
cfg.FloatOpt("heat_stack_resume_timeout",
default=3600.0,
deprecated_group="benchmark",
help="Time(in sec) to wait for stack to be resumed."),
cfg.FloatOpt("heat_stack_resume_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Time interval(in sec) between checks when waiting for "
"stack resume."),
cfg.FloatOpt("heat_stack_snapshot_timeout",
default=3600.0,
deprecated_group="benchmark",
help="Time(in sec) to wait for stack snapshot to "
"be created."),
cfg.FloatOpt("heat_stack_snapshot_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Time interval(in sec) between checks when waiting for "
"stack snapshot to be created."),
cfg.FloatOpt("heat_stack_restore_timeout",
default=3600.0,
deprecated_group="benchmark",
help="Time(in sec) to wait for stack to be restored from "
"snapshot."),
cfg.FloatOpt("heat_stack_restore_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Time interval(in sec) between checks when waiting for "
"stack to be restored."),
cfg.FloatOpt("heat_stack_scale_timeout",
default=3600.0,
deprecated_group="benchmark",
help="Time (in sec) to wait for stack to scale up or down."),
cfg.FloatOpt("heat_stack_scale_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Time interval (in sec) between checks when waiting for "
"a stack to scale up or down.")
]}

@ -1,36 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.FloatOpt("ironic_node_create_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Interval(in sec) between checks when waiting for node "
"creation."),
cfg.FloatOpt("ironic_node_create_timeout",
default=300,
deprecated_group="benchmark",
help="Ironic node create timeout"),
cfg.FloatOpt("ironic_node_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Ironic node poll interval"),
cfg.FloatOpt("ironic_node_delete_timeout",
default=300,
deprecated_group="benchmark",
help="Ironic node create timeout")
]}

@ -1,25 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.IntOpt("roles_context_resource_management_workers",
default=30,
deprecated_name="resource_management_workers",
deprecated_group="roles_context",
help="How many concurrent threads to use for serving roles "
"context"),
]}

@ -1,39 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.IntOpt("users_context_resource_management_workers",
default=20,
deprecated_name="resource_management_workers",
deprecated_group="users_context",
help="The number of concurrent threads to use for serving "
"users context."),
cfg.StrOpt("project_domain",
default="default",
deprecated_group="users_context",
help="ID of domain in which projects will be created."),
cfg.StrOpt("user_domain",
default="default",
deprecated_group="users_context",
help="ID of domain in which users will be created."),
cfg.StrOpt("keystone_default_role",
default="member",
deprecated_group="users_context",
help="The default role name of the keystone to assign to "
"users.")
]}

@ -1,52 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.FloatOpt("magnum_cluster_create_prepoll_delay",
default=5.0,
deprecated_group="benchmark",
help="Time(in sec) to sleep after creating a resource before "
"polling for the status."),
cfg.FloatOpt("magnum_cluster_create_timeout",
default=2400.0,
deprecated_group="benchmark",
help="Time(in sec) to wait for magnum cluster to be "
"created."),
cfg.FloatOpt("magnum_cluster_create_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Time interval(in sec) between checks when waiting for "
"cluster creation."),
cfg.FloatOpt("k8s_pod_create_timeout",
default=1200.0,
deprecated_group="benchmark",
help="Time(in sec) to wait for k8s pod to be created."),
cfg.FloatOpt("k8s_pod_create_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Time interval(in sec) between checks when waiting for "
"k8s pod creation."),
cfg.FloatOpt("k8s_rc_create_timeout",
default=1200.0,
deprecated_group="benchmark",
help="Time(in sec) to wait for k8s rc to be created."),
cfg.FloatOpt("k8s_rc_create_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Time interval(in sec) between checks when waiting for "
"k8s rc creation.")
]}

@ -1,69 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.FloatOpt(
"manila_share_create_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Delay between creating Manila share and polling for its "
"status."),
cfg.FloatOpt(
"manila_share_create_timeout",
default=300.0,
deprecated_group="benchmark",
help="Timeout for Manila share creation."),
cfg.FloatOpt(
"manila_share_create_poll_interval",
default=3.0,
deprecated_group="benchmark",
help="Interval between checks when waiting for Manila share "
"creation."),
cfg.FloatOpt(
"manila_share_delete_timeout",
default=180.0,
deprecated_group="benchmark",
help="Timeout for Manila share deletion."),
cfg.FloatOpt(
"manila_share_delete_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Interval between checks when waiting for Manila share "
"deletion."),
cfg.FloatOpt(
"manila_access_create_timeout",
default=300.0,
deprecated_group="benchmark",
help="Timeout for Manila access creation."),
cfg.FloatOpt(
"manila_access_create_poll_interval",
default=3.0,
deprecated_group="benchmark",
help="Interval between checks when waiting for Manila access "
"creation."),
cfg.FloatOpt(
"manila_access_delete_timeout",
default=180.0,
deprecated_group="benchmark",
help="Timeout for Manila access deletion."),
cfg.FloatOpt(
"manila_access_delete_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Interval between checks when waiting for Manila access "
"deletion."),
]}

@ -1,23 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.IntOpt(
"mistral_execution_timeout",
default=200,
deprecated_group="benchmark",
help="mistral execution timeout")
]}

@ -1,24 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.FloatOpt(
"monasca_metric_create_prepoll_delay",
default=15.0,
deprecated_group="benchmark",
help="Delay between creating Monasca metrics and polling for "
"its elements.")
]}

@ -1,29 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.IntOpt("murano_deploy_environment_timeout",
default=1200,
deprecated_name="deploy_environment_timeout",
deprecated_group="benchmark",
help="A timeout in seconds for an environment deploy"),
cfg.IntOpt("murano_deploy_environment_check_interval",
default=5,
deprecated_name="deploy_environment_check_interval",
deprecated_group="benchmark",
help="Deploy environment check interval in seconds"),
]}

@ -1,32 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.FloatOpt("neutron_create_loadbalancer_timeout",
default=500.0,
deprecated_group="benchmark",
help="Neutron create loadbalancer timeout"),
cfg.FloatOpt("neutron_create_loadbalancer_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Neutron create loadbalancer poll interval"),
cfg.BoolOpt("pre_newton_neutron",
default=False,
help="Whether Neutron API is older then OpenStack Newton or "
"not. Based in this option, some external fields for "
"identifying resources can be applied.")
]}

@ -1,308 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
# prepoll delay, timeout, poll interval
# "start": (0, 300, 1)
cfg.FloatOpt("nova_server_start_prepoll_delay",
default=0.0,
deprecated_group="benchmark",
help="Time to sleep after start before polling for status"),
cfg.FloatOpt("nova_server_start_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server start timeout"),
cfg.FloatOpt("nova_server_start_poll_interval",
deprecated_group="benchmark",
default=1.0,
help="Server start poll interval"),
# "stop": (0, 300, 2)
cfg.FloatOpt("nova_server_stop_prepoll_delay",
default=0.0,
help="Time to sleep after stop before polling for status"),
cfg.FloatOpt("nova_server_stop_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server stop timeout"),
cfg.FloatOpt("nova_server_stop_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server stop poll interval"),
# "boot": (1, 300, 1)
cfg.FloatOpt("nova_server_boot_prepoll_delay",
default=1.0,
deprecated_group="benchmark",
help="Time to sleep after boot before polling for status"),
cfg.FloatOpt("nova_server_boot_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server boot timeout"),
cfg.FloatOpt("nova_server_boot_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server boot poll interval"),
# "delete": (2, 300, 2)
cfg.FloatOpt("nova_server_delete_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after delete before polling for status"),
cfg.FloatOpt("nova_server_delete_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server delete timeout"),
cfg.FloatOpt("nova_server_delete_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server delete poll interval"),
# "reboot": (2, 300, 2)
cfg.FloatOpt("nova_server_reboot_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after reboot before polling for status"),
cfg.FloatOpt("nova_server_reboot_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server reboot timeout"),
cfg.FloatOpt("nova_server_reboot_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server reboot poll interval"),
# "rebuild": (1, 300, 1)
cfg.FloatOpt("nova_server_rebuild_prepoll_delay",
default=1.0,
deprecated_group="benchmark",
help="Time to sleep after rebuild before polling for status"),
cfg.FloatOpt("nova_server_rebuild_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server rebuild timeout"),
cfg.FloatOpt("nova_server_rebuild_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Server rebuild poll interval"),
# "rescue": (2, 300, 2)
cfg.FloatOpt("nova_server_rescue_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after rescue before polling for status"),
cfg.FloatOpt("nova_server_rescue_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server rescue timeout"),
cfg.FloatOpt("nova_server_rescue_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server rescue poll interval"),
# "unrescue": (2, 300, 2)
cfg.FloatOpt("nova_server_unrescue_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after unrescue "
"before polling for status"),
cfg.FloatOpt("nova_server_unrescue_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server unrescue timeout"),
cfg.FloatOpt("nova_server_unrescue_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server unrescue poll interval"),
# "suspend": (2, 300, 2)
cfg.FloatOpt("nova_server_suspend_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after suspend before polling for status"),
cfg.FloatOpt("nova_server_suspend_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server suspend timeout"),
cfg.FloatOpt("nova_server_suspend_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server suspend poll interval"),
# "resume": (2, 300, 2)
cfg.FloatOpt("nova_server_resume_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after resume before polling for status"),
cfg.FloatOpt("nova_server_resume_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server resume timeout"),
cfg.FloatOpt("nova_server_resume_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server resume poll interval"),
# "pause": (2, 300, 2)
cfg.FloatOpt("nova_server_pause_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after pause before polling for status"),
cfg.FloatOpt("nova_server_pause_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server pause timeout"),
cfg.FloatOpt("nova_server_pause_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server pause poll interval"),
# "unpause": (2, 300, 2)
cfg.FloatOpt("nova_server_unpause_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after unpause before polling for status"),
cfg.FloatOpt("nova_server_unpause_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server unpause timeout"),
cfg.FloatOpt("nova_server_unpause_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server unpause poll interval"),
# "shelve": (2, 300, 2)
cfg.FloatOpt("nova_server_shelve_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after shelve before polling for status"),
cfg.FloatOpt("nova_server_shelve_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server shelve timeout"),
cfg.FloatOpt("nova_server_shelve_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server shelve poll interval"),
# "unshelve": (2, 300, 2)
cfg.FloatOpt("nova_server_unshelve_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after unshelve before "
"polling for status"),
cfg.FloatOpt("nova_server_unshelve_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server unshelve timeout"),
cfg.FloatOpt("nova_server_unshelve_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server unshelve poll interval"),
# "image_create": (0, 300, 2)
cfg.FloatOpt("nova_server_image_create_prepoll_delay",
default=0.0,
deprecated_group="benchmark",
help="Time to sleep after image_create before polling"
" for status"),
cfg.FloatOpt("nova_server_image_create_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server image_create timeout"),
cfg.FloatOpt("nova_server_image_create_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server image_create poll interval"),
# "image_delete": (0, 300, 2)
cfg.FloatOpt("nova_server_image_delete_prepoll_delay",
default=0.0,
deprecated_group="benchmark",
help="Time to sleep after image_delete before polling"
" for status"),
cfg.FloatOpt("nova_server_image_delete_timeout",
default=300.0,
deprecated_group="benchmark",
help="Server image_delete timeout"),
cfg.FloatOpt("nova_server_image_delete_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server image_delete poll interval"),
# "resize": (2, 400, 5)
cfg.FloatOpt("nova_server_resize_prepoll_delay",
default=2.0,
deprecated_group="benchmark",
help="Time to sleep after resize before polling for status"),
cfg.FloatOpt("nova_server_resize_timeout",
default=400.0,
deprecated_group="benchmark",
help="Server resize timeout"),
cfg.FloatOpt("nova_server_resize_poll_interval",
default=5.0,
deprecated_group="benchmark",
help="Server resize poll interval"),
# "resize_confirm": (0, 200, 2)
cfg.FloatOpt("nova_server_resize_confirm_prepoll_delay",
default=0.0,
deprecated_group="benchmark",
help="Time to sleep after resize_confirm before polling"
" for status"),
cfg.FloatOpt("nova_server_resize_confirm_timeout",
default=200.0,
deprecated_group="benchmark",
help="Server resize_confirm timeout"),
cfg.FloatOpt("nova_server_resize_confirm_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server resize_confirm poll interval"),
# "resize_revert": (0, 200, 2)
cfg.FloatOpt("nova_server_resize_revert_prepoll_delay",
default=0.0,
deprecated_group="benchmark",
help="Time to sleep after resize_revert before polling"
" for status"),
cfg.FloatOpt("nova_server_resize_revert_timeout",
default=200.0,
deprecated_group="benchmark",
help="Server resize_revert timeout"),
cfg.FloatOpt("nova_server_resize_revert_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server resize_revert poll interval"),
# "live_migrate": (1, 400, 2)
cfg.FloatOpt("nova_server_live_migrate_prepoll_delay",
default=1.0,
deprecated_group="benchmark",
help="Time to sleep after live_migrate before polling"
" for status"),
cfg.FloatOpt("nova_server_live_migrate_timeout",
default=400.0,
deprecated_group="benchmark",
help="Server live_migrate timeout"),
cfg.FloatOpt("nova_server_live_migrate_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server live_migrate poll interval"),
# "migrate": (1, 400, 2)
cfg.FloatOpt("nova_server_migrate_prepoll_delay",
default=1.0,
deprecated_group="benchmark",
help="Time to sleep after migrate before polling for status"),
cfg.FloatOpt("nova_server_migrate_timeout",
default=400.0,
deprecated_group="benchmark",
help="Server migrate timeout"),
cfg.FloatOpt("nova_server_migrate_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Server migrate poll interval"),
# "detach":
cfg.FloatOpt("nova_detach_volume_timeout",
default=200.0,
deprecated_group="benchmark",
help="Nova volume detach timeout"),
cfg.FloatOpt("nova_detach_volume_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Nova volume detach poll interval")
]}
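
The commented triples above (prepoll delay, timeout, poll interval) all feed the same wait loop. A hedged sketch of the consuming pattern with the helpers from rally.task.utils; the server variable and the ACTIVE status are illustrative:

import time

from rally.common import cfg
from rally.task import utils

CONF = cfg.CONF

time.sleep(CONF.openstack.nova_server_boot_prepoll_delay)
server = utils.wait_for_status(
    server,
    ready_statuses=["ACTIVE"],
    update_resource=utils.get_from_manager(),
    timeout=CONF.openstack.nova_server_boot_timeout,
    check_interval=CONF.openstack.nova_server_boot_poll_interval)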

@ -1,55 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.plugins.openstack.cfg import cinder
from rally.plugins.openstack.cfg import ec2
from rally.plugins.openstack.cfg import glance
from rally.plugins.openstack.cfg import heat
from rally.plugins.openstack.cfg import ironic
from rally.plugins.openstack.cfg import magnum
from rally.plugins.openstack.cfg import manila
from rally.plugins.openstack.cfg import mistral
from rally.plugins.openstack.cfg import monasca
from rally.plugins.openstack.cfg import murano
from rally.plugins.openstack.cfg import neutron
from rally.plugins.openstack.cfg import nova
from rally.plugins.openstack.cfg import osclients
from rally.plugins.openstack.cfg import profiler
from rally.plugins.openstack.cfg import sahara
from rally.plugins.openstack.cfg import senlin
from rally.plugins.openstack.cfg import vm
from rally.plugins.openstack.cfg import watcher
from rally.plugins.openstack.cfg import tempest
from rally.plugins.openstack.cfg import keystone_roles
from rally.plugins.openstack.cfg import keystone_users
from rally.plugins.openstack.cfg import cleanup
def list_opts():
opts = {}
for l_opts in (cinder.OPTS, ec2.OPTS, heat.OPTS, ironic.OPTS, magnum.OPTS,
manila.OPTS, mistral.OPTS, monasca.OPTS, murano.OPTS,
nova.OPTS, osclients.OPTS, profiler.OPTS, sahara.OPTS,
vm.OPTS, glance.OPTS, watcher.OPTS, tempest.OPTS,
keystone_roles.OPTS, keystone_users.OPTS, cleanup.OPTS,
senlin.OPTS, neutron.OPTS):
for category, opt in l_opts.items():
opts.setdefault(category, [])
opts[category].extend(opt)
return opts
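
A sketch of how an aggregated mapping like this is typically registered so that the CONF.<group>.<option> lookups used throughout these modules resolve; it assumes rally.common.cfg re-exports the oslo.config API, which its usage here suggests:

from rally.common import cfg

CONF = cfg.CONF

for group, options in list_opts().items():
    # DEFAULT options are registered without an explicit group.
    CONF.register_opts(options, group=None if group == "DEFAULT" else group)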

@ -1,26 +0,0 @@
# Copyright 2017: GoDaddy Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {
"DEFAULT": [
cfg.FloatOpt(
"openstack_client_http_timeout",
default=180.0,
help="HTTP timeout for any of OpenStack service in seconds")
]
}

@ -1,23 +0,0 @@
# Copyright 2017: Inria.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.BoolOpt("enable_profiler",
default=True,
deprecated_group="benchmark",
help="Enable or disable osprofiler to trace the scenarios")
]}

@ -1,43 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.IntOpt("sahara_cluster_create_timeout",
default=1800,
deprecated_group="benchmark",
help="A timeout in seconds for a cluster create operation"),
cfg.IntOpt("sahara_cluster_delete_timeout",
default=900,
deprecated_group="benchmark",
help="A timeout in seconds for a cluster delete operation"),
cfg.IntOpt("sahara_cluster_check_interval",
default=5,
deprecated_group="benchmark",
help="Cluster status polling interval in seconds"),
cfg.IntOpt("sahara_job_execution_timeout",
default=600,
deprecated_group="benchmark",
help="A timeout in seconds for a Job Execution to complete"),
cfg.IntOpt("sahara_job_check_interval",
default=5,
deprecated_group="benchmark",
help="Job Execution status polling interval in seconds"),
cfg.IntOpt("sahara_workers_per_proxy",
default=20,
deprecated_group="benchmark",
help="Amount of workers one proxy should serve to.")
]}

@ -1,23 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.FloatOpt("senlin_action_timeout",
default=3600.0,
deprecated_group="benchmark",
help="Time in seconds to wait for senlin action to finish.")
]}

@ -1,74 +0,0 @@
# Copyright 2013: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.StrOpt("img_url",
default="http://download.cirros-cloud.net/"
"0.3.5/cirros-0.3.5-x86_64-disk.img",
deprecated_group="tempest",
help="image URL"),
cfg.StrOpt("img_disk_format",
default="qcow2",
deprecated_group="tempest",
help="Image disk format to use when creating the image"),
cfg.StrOpt("img_container_format",
default="bare",
deprecated_group="tempest",
help="Image container format to use when creating the image"),
cfg.StrOpt("img_name_regex",
default="^.*(cirros|testvm).*$",
deprecated_group="tempest",
help="Regular expression for name of a public image to "
"discover it in the cloud and use it for the tests. "
"Note that when Rally is searching for the image, case "
"insensitive matching is performed. Specify nothing "
"('img_name_regex =') if you want to disable discovering. "
"In this case Rally will create needed resources by "
"itself if the values for the corresponding config "
"options are not specified in the Tempest config file"),
cfg.StrOpt("swift_operator_role",
default="Member",
deprecated_group="tempest",
help="Role required for users "
"to be able to create Swift containers"),
cfg.StrOpt("swift_reseller_admin_role",
default="ResellerAdmin",
deprecated_group="tempest",
help="User role that has reseller admin"),
cfg.StrOpt("heat_stack_owner_role",
default="heat_stack_owner",
deprecated_group="tempest",
help="Role required for users "
"to be able to manage Heat stacks"),
cfg.StrOpt("heat_stack_user_role",
default="heat_stack_user",
deprecated_group="tempest",
help="Role for Heat template-defined users"),
cfg.IntOpt("flavor_ref_ram",
default="64",
deprecated_group="tempest",
help="Primary flavor RAM size used by most of the test cases"),
cfg.IntOpt("flavor_ref_alt_ram",
default="128",
deprecated_group="tempest",
help="Alternate reference flavor RAM size used by test that"
"need two flavors, like those that resize an instance"),
cfg.IntOpt("heat_instance_type_ram",
default="64",
deprecated_group="tempest",
help="RAM size flavor used for orchestration test cases")
]}

@ -1,27 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.FloatOpt("vm_ping_poll_interval",
default=1.0,
deprecated_group="benchmark",
help="Interval between checks when waiting for a VM to "
"become pingable"),
cfg.FloatOpt("vm_ping_timeout",
default=120.0,
deprecated_group="benchmark",
help="Time to wait for a VM to become pingable")
]}

@ -1,26 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
OPTS = {"openstack": [
cfg.FloatOpt("watcher_audit_launch_poll_interval",
default=2.0,
deprecated_group="benchmark",
help="Watcher audit launch interval"),
cfg.IntOpt("watcher_audit_launch_timeout",
default=300,
deprecated_group="benchmark",
help="Watcher audit launch timeout")
]}

@ -1,133 +0,0 @@
# Copyright 2014: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import cfg
from rally.task import utils
CONF = cfg.CONF
cleanup_group = cfg.OptGroup(name="cleanup", title="Cleanup Options")
# NOTE(andreykurilin): There are cases when there is no way to use any kind
# of "name" for resource as an identifier of alignment resource to the
# particular task run and even to Rally itself. Previously, we used empty
# strings as a workaround so that name matching worked for specific
# templates, but theoretically such behaviour can hide other cases where a
# resource should have a name property but it is missing.
# Let's use instances of specific class to return as a name of resources
# which do not have names at all.
class NoName(object):
def __init__(self, resource_type):
self.resource_type = resource_type
def __repr__(self):
return "<NoName %s resource>" % self.resource_type
def resource(service, resource, order=0, admin_required=False,
perform_for_admin_only=False, tenant_resource=False,
max_attempts=3, timeout=CONF.openstack.resource_deletion_timeout,
interval=1, threads=CONF.openstack.cleanup_threads):
"""Decorator that overrides resource specification.
Just put it on top of your resource class and specify arguments that you
need.
:param service: It is equal to client name for corresponding service.
E.g. "nova", "cinder" or "zaqar"
:param resource: Client manager name for resource. E.g. in case of
nova.servers you should write here "servers"
:param order: Used to adjust priority of cleanup for different resource
types
:param admin_required: Admin user is required
:param perform_for_admin_only: Perform cleanup for admin user only
:param tenant_resource: Perform deletion only 1 time per tenant
:param max_attempts: Max amount of attempts to delete single resource
:param timeout: Max duration of deletion in seconds
:param interval: Resource status polling interval
:param threads: Amount of threads (workers) that are deleting resources
simultaneously
"""
def inner(cls):
# TODO(boris-42): This can be written better I believe =)
cls._service = service
cls._resource = resource
cls._order = order
cls._admin_required = admin_required
cls._perform_for_admin_only = perform_for_admin_only
cls._max_attempts = max_attempts
cls._timeout = timeout
cls._interval = interval
cls._threads = threads
cls._tenant_resource = tenant_resource
return cls
return inner
@resource(service=None, resource=None)
class ResourceManager(object):
"""Base class for cleanup plugins for specific resources.
Use the @resource decorator to specify the major configuration of the
resource manager. Usually you should specify: service, resource and order.
If the project's python client is very specific, you can override the
delete(), list() and is_deleted() methods to make them fit your case.
"""
def __init__(self, resource=None, admin=None, user=None, tenant_uuid=None):
self.admin = admin
self.user = user
self.raw_resource = resource
self.tenant_uuid = tenant_uuid
def _manager(self):
client = self._admin_required and self.admin or self.user
return getattr(getattr(client, self._service)(), self._resource)
def id(self):
"""Returns id of resource."""
return self.raw_resource.id
def name(self):
"""Returns name of resource."""
return self.raw_resource.name
def is_deleted(self):
"""Checks if the resource is deleted.
Fetch the resource by id from the service and check its status.
Returns True in case of NotFound or if the status is DELETED or
DELETE_COMPLETE, otherwise False.
"""
try:
resource = self._manager().get(self.id())
except Exception as e:
return getattr(e, "code", getattr(e, "http_status", 400)) == 404
return utils.get_status(resource) in ("DELETED", "DELETE_COMPLETE")
def delete(self):
"""Delete resource that corresponds to instance of this class."""
self._manager().delete(self.id())
def list(self):
"""List all resources specific for admin or user."""
return self._manager().list()
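
As the docstring above suggests, a typical cleanup plugin is just the
@resource decorator plus whatever method overrides it needs. A hedged
sketch for a hypothetical "foo" client (the service, resource and argument
names here are illustrative, not real plugins):

@resource(service="foo", resource="bars", order=42, tenant_resource=True)
class FooBars(ResourceManager):
    def list(self):
        # assume the hypothetical foo client needs an explicit flag
        return self._manager().list(all_tenants=False)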

View File

@ -1,285 +0,0 @@
# Copyright 2014: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
from rally.common import broker
from rally.common import logging
from rally.common.plugin import discover
from rally.common.plugin import plugin
from rally.common import utils as rutils
from rally.plugins.openstack.cleanup import base
LOG = logging.getLogger(__name__)
class SeekAndDestroy(object):
def __init__(self, manager_cls, admin, users, api_versions=None,
resource_classes=None, task_id=None):
"""Resource deletion class.
This class contains method exterminate() that finds and deletes
all resources created by Rally.
:param manager_cls: subclass of base.ResourceManager
:param admin: admin credential like in context["admin"]
:param users: users credentials like in context["users"]
:param api_versions: dict of client API versions
:param resource_classes: Resource classes to match resource names
against
:param task_id: The UUID of task to match resource names against
"""
self.manager_cls = manager_cls
self.admin = admin
self.users = users or []
self.api_versions = api_versions
self.resource_classes = resource_classes or [
rutils.RandomNameGeneratorMixin]
self.task_id = task_id
def _get_cached_client(self, user):
"""Simplifies initialization and caching OpenStack clients."""
if not user:
return None
# NOTE(astudenov): Credential now supports caching by default
return user["credential"].clients(api_info=self.api_versions)
def _delete_single_resource(self, resource):
"""Safe resource deletion with retries and timeouts.
Send a request to delete the resource; in case of failures, repeat it a
few times. After that, poll the status of the resource until it is
deleted. Logs a warning with the UUID of any resource that wasn't
deleted.
:param resource: instance of a resource manager initialized with the
resource that should be deleted.
"""
msg_kw = {
"uuid": resource.id(),
"name": resource.name() or "",
"service": resource._service,
"resource": resource._resource
}
LOG.debug(
"Deleting %(service)s.%(resource)s object %(name)s (%(uuid)s)"
% msg_kw)
try:
rutils.retry(resource._max_attempts, resource.delete)
except Exception as e:
msg = ("Resource deletion failed, max retries exceeded for "
"%(service)s.%(resource)s: %(uuid)s.") % msg_kw
if logging.is_debug():
LOG.exception(msg)
else:
LOG.warning("%(msg)s Reason: %(e)s" % {"msg": msg, "e": e})
else:
started = time.time()
failures_count = 0
while time.time() - started < resource._timeout:
try:
if resource.is_deleted():
return
except Exception as e:
LOG.exception(
"Seems like %s.%s.is_deleted(self) method is broken "
"It shouldn't raise any exceptions."
% (resource.__module__, type(resource).__name__))
# NOTE(boris-42): Avoid LOG spamming in case of bad
# is_deleted() method
failures_count += 1
if failures_count > resource._max_attempts:
break
finally:
rutils.interruptable_sleep(resource._interval)
LOG.warning("Resource deletion failed, timeout occurred for "
"%(service)s.%(resource)s: %(uuid)s." % msg_kw)
def _publisher(self, queue):
"""Publisher for deletion jobs.
This method iterates over all users, lists all resources
(using manager_cls) and puts jobs for deletion.
Every deletion job is a tuple (admin, user, raw_resource) identifying
the resource that should be deleted.
In case of tenant-based resources, resources are listed only for one
user per tenant.
"""
def _publish(admin, user, manager):
try:
for raw_resource in rutils.retry(3, manager.list):
queue.append((admin, user, raw_resource))
except Exception:
LOG.exception(
"Seems like %s.%s.list(self) method is broken. "
"It shouldn't raise any exceptions."
% (manager.__module__, type(manager).__name__))
if self.admin and (not self.users
or self.manager_cls._perform_for_admin_only):
manager = self.manager_cls(
admin=self._get_cached_client(self.admin))
_publish(self.admin, None, manager)
else:
visited_tenants = set()
admin_client = self._get_cached_client(self.admin)
for user in self.users:
if (self.manager_cls._tenant_resource
and user["tenant_id"] in visited_tenants):
continue
visited_tenants.add(user["tenant_id"])
manager = self.manager_cls(
admin=admin_client,
user=self._get_cached_client(user),
tenant_uuid=user["tenant_id"])
_publish(self.admin, user, manager)
def _consumer(self, cache, args):
"""Method that consumes single deletion job."""
admin, user, raw_resource = args
manager = self.manager_cls(
resource=raw_resource,
admin=self._get_cached_client(admin),
user=self._get_cached_client(user),
tenant_uuid=user and user["tenant_id"])
if (isinstance(manager.name(), base.NoName) or
rutils.name_matches_object(
manager.name(), *self.resource_classes,
task_id=self.task_id, exact=False)):
self._delete_single_resource(manager)
def exterminate(self):
"""Delete all resources for passed users, admin and resource_mgr."""
broker.run(self._publisher, self._consumer,
consumers_count=self.manager_cls._threads)
def list_resource_names(admin_required=None):
"""List all resource managers names.
Returns all service names and all combinations of service.resource names.
:param admin_required: None -> returns all ResourceManagers
True -> returns only admin ResourceManagers
False -> returns only non admin ResourceManagers
"""
res_mgrs = discover.itersubclasses(base.ResourceManager)
if admin_required is not None:
res_mgrs = filter(lambda cls: cls._admin_required == admin_required,
res_mgrs)
names = set()
for cls in res_mgrs:
names.add(cls._service)
names.add("%s.%s" % (cls._service, cls._resource))
return names
def find_resource_managers(names=None, admin_required=None):
"""Returns resource managers.
:param names: List of names in format <service> or <service>.<resource>
that is used for filtering resource manager classes
:param admin_required: None -> returns all ResourceManagers
True -> returns only admin ResourceManagers
False -> returns only non admin ResourceManagers
"""
names = set(names or [])
resource_managers = []
for manager in discover.itersubclasses(base.ResourceManager):
if admin_required is not None:
if admin_required != manager._admin_required:
continue
if (manager._service in names
or "%s.%s" % (manager._service, manager._resource) in names):
resource_managers.append(manager)
resource_managers.sort(key=lambda x: x._order)
found_names = set()
for mgr in resource_managers:
found_names.add(mgr._service)
found_names.add("%s.%s" % (mgr._service, mgr._resource))
missing = names - found_names
if missing:
LOG.warning("Missing resource managers: %s" % ", ".join(missing))
return resource_managers
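
For example, given the managers defined elsewhere in this commit, a call
like the following (the names are illustrative) would return every
non-admin nova manager plus the cinder.volumes manager, sorted by _order:

managers = find_resource_managers(names=["nova", "cinder.volumes"],
                                  admin_required=False)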
def cleanup(names=None, admin_required=None, admin=None, users=None,
api_versions=None, superclass=plugin.Plugin, task_id=None):
"""Generic cleaner.
This method goes through all plugins and keeps only those whose
_service or "_service._resource" name is listed in ``names``. Then it
goes through all passed users and cleans all related resources using
the matching cleaners.
:param names: Use only resource managers whose names are in this list.
Names are matched as _service or
"%s.%s" % (_service, _resource)
:param admin_required: If None -> return all plugins
If True -> return only admin plugins
If False -> return only non admin plugins
:param admin: rally.deployment.credential.Credential that corresponds to
OpenStack admin.
:param users: List of OpenStack users that was used during testing.
Every user has the following structure:
{
"id": <uuid1>,
"tenant_id": <uuid2>,
"credential": <rally.deployment.credential.Credential>
}
:param superclass: The plugin superclass to perform cleanup
for. E.g., this could be
``rally.task.scenario.Scenario`` to cleanup all
Scenario resources.
:param task_id: The UUID of task
"""
resource_classes = [cls for cls in discover.itersubclasses(superclass)
if issubclass(cls, rutils.RandomNameGeneratorMixin)]
if not resource_classes and issubclass(superclass,
rutils.RandomNameGeneratorMixin):
resource_classes.append(superclass)
for manager in find_resource_managers(names, admin_required):
LOG.debug("Cleaning up %(service)s %(resource)s objects"
% {"service": manager._service,
"resource": manager._resource})
SeekAndDestroy(manager, admin, users,
api_versions=api_versions,
resource_classes=resource_classes,
task_id=task_id).exterminate()
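
A sketch of a typical invocation, mirroring how the cleanup contexts later
in this commit call this entry point (the variable names here are
illustrative):

cleanup(names=["nova.servers", "cinder.volumes"],
        admin_required=False,
        users=task_context.get("users", []),
        superclass=scenario.OpenStackScenario,
        task_id=owner_id)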

View File

@ -1,970 +0,0 @@
# Copyright 2014: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from boto import exception as boto_exception
from neutronclient.common import exceptions as neutron_exceptions
from novaclient import exceptions as nova_exc
from saharaclient.api import base as saharaclient_base
from rally.common import cfg
from rally.common import logging
from rally.plugins.openstack.cleanup import base
from rally.plugins.openstack.services.identity import identity
from rally.plugins.openstack.services.image import glance_v2
from rally.plugins.openstack.services.image import image
from rally.task import utils as task_utils
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
def get_order(start):
return iter(range(start, start + 99))
class SynchronizedDeletion(object):
def is_deleted(self):
return True
class QuotaMixin(SynchronizedDeletion, base.ResourceManager):
# NOTE(andreykurilin): Quota resources are quite complex in terms of
# cleanup. First of all, they have no name or id fields at all; the only
# identifier is a reference to a Keystone Project/Tenant. Also, we should
# remove them in the existing-users case. To cover both cases we use the
# project name as the name field (which handles the existing-users case)
# and the project id as the id of the resource.
def list(self):
if not self.tenant_uuid:
return []
client = self._admin_required and self.admin or self.user
project = identity.Identity(client).get_project(self.tenant_uuid)
return [project]
# MAGNUM
_magnum_order = get_order(80)
@base.resource(service=None, resource=None)
class MagnumMixin(base.ResourceManager):
def id(self):
"""Returns id of resource."""
return self.raw_resource.uuid
def list(self):
result = []
marker = None
while True:
resources = self._manager().list(marker=marker)
if not resources:
break
result.extend(resources)
marker = resources[-1].uuid
return result
@base.resource("magnum", "clusters", order=next(_magnum_order),
tenant_resource=True)
class MagnumCluster(MagnumMixin):
"""Resource class for Magnum cluster."""
@base.resource("magnum", "cluster_templates", order=next(_magnum_order),
tenant_resource=True)
class MagnumClusterTemplate(MagnumMixin):
"""Resource class for Magnum cluster_template."""
# HEAT
@base.resource("heat", "stacks", order=100, tenant_resource=True)
class HeatStack(base.ResourceManager):
def name(self):
return self.raw_resource.stack_name
# SENLIN
_senlin_order = get_order(150)
@base.resource(service=None, resource=None, admin_required=True)
class SenlinMixin(base.ResourceManager):
def id(self):
return self.raw_resource["id"]
def _manager(self):
client = self._admin_required and self.admin or self.user
return getattr(client, self._service)()
def list(self):
return getattr(self._manager(), self._resource)()
def delete(self):
# make singular form of resource name from plural form
res_name = self._resource[:-1]
return getattr(self._manager(), "delete_%s" % res_name)(self.id())
@base.resource("senlin", "clusters",
admin_required=True, order=next(_senlin_order))
class SenlinCluster(SenlinMixin):
"""Resource class for Senlin Cluster."""
@base.resource("senlin", "profiles", order=next(_senlin_order),
admin_required=False, tenant_resource=True)
class SenlinProfile(SenlinMixin):
"""Resource class for Senlin Profile."""
# NOVA
_nova_order = get_order(200)
@base.resource("nova", "servers", order=next(_nova_order),
tenant_resource=True)
class NovaServer(base.ResourceManager):
def list(self):
"""List all servers."""
return self._manager().list(limit=-1)
def delete(self):
if getattr(self.raw_resource, "OS-EXT-STS:locked", False):
self.raw_resource.unlock()
super(NovaServer, self).delete()
@base.resource("nova", "server_groups", order=next(_nova_order),
tenant_resource=True)
class NovaServerGroups(base.ResourceManager):
pass
@base.resource("nova", "keypairs", order=next(_nova_order))
class NovaKeypair(SynchronizedDeletion, base.ResourceManager):
pass
@base.resource("nova", "quotas", order=next(_nova_order),
admin_required=True, tenant_resource=True)
class NovaQuotas(QuotaMixin):
pass
@base.resource("nova", "flavors", order=next(_nova_order),
admin_required=True, perform_for_admin_only=True)
class NovaFlavors(base.ResourceManager):
    def is_deleted(self):
try:
self._manager().get(self.name())
except nova_exc.NotFound:
return True
return False
@base.resource("nova", "aggregates", order=next(_nova_order),
admin_required=True, perform_for_admin_only=True)
class NovaAggregate(SynchronizedDeletion, base.ResourceManager):
def delete(self):
for host in self.raw_resource.hosts:
self.raw_resource.remove_host(host)
super(NovaAggregate, self).delete()
# EC2
_ec2_order = get_order(250)
class EC2Mixin(object):
def _manager(self):
return getattr(self.user, self._service)()
@base.resource("ec2", "servers", order=next(_ec2_order))
class EC2Server(EC2Mixin, base.ResourceManager):
def is_deleted(self):
try:
instances = self._manager().get_only_instances(
instance_ids=[self.id()])
except boto_exception.EC2ResponseError as e:
# NOTE(wtakase): Nova EC2 API returns 'InvalidInstanceID.NotFound'
#                if the instance is not found. In this case, we consider
#                the instance to have already been deleted.
return getattr(e, "error_code") == "InvalidInstanceID.NotFound"
# NOTE(wtakase): After deletion, an instance can be in the 'terminated'
#                state. If all instance states are 'terminated', this
#                returns True. If get_only_instances() returns an empty
#                list, this also returns True because we consider the
#                instance to have already been deleted.
return all(map(lambda i: i.state == "terminated", instances))
def delete(self):
self._manager().terminate_instances(instance_ids=[self.id()])
def list(self):
return self._manager().get_only_instances()
# NEUTRON
_neutron_order = get_order(300)
@base.resource(service=None, resource=None, admin_required=True)
class NeutronMixin(SynchronizedDeletion, base.ResourceManager):
# Neutron has the best client ever, so we need to override everything
def supports_extension(self, extension):
exts = self._manager().list_extensions().get("extensions", [])
if any(ext.get("alias") == extension for ext in exts):
return True
return False
def _manager(self):
client = self._admin_required and self.admin or self.user
return getattr(client, self._service)()
def id(self):
return self.raw_resource["id"]
def name(self):
return self.raw_resource["name"]
def delete(self):
delete_method = getattr(self._manager(), "delete_%s" % self._resource)
delete_method(self.id())
def list(self):
if self._resource.endswith("y"):
resources = self._resource[:-1] + "ies"
else:
resources = self._resource + "s"
list_method = getattr(self._manager(), "list_%s" % resources)
result = list_method(tenant_id=self.tenant_uuid)[resources]
if self.tenant_uuid:
result = [r for r in result if r["tenant_id"] == self.tenant_uuid]
return result
class NeutronLbaasV1Mixin(NeutronMixin):
def list(self):
if self.supports_extension("lbaas"):
return super(NeutronLbaasV1Mixin, self).list()
return []
@base.resource("neutron", "vip", order=next(_neutron_order),
tenant_resource=True)
class NeutronV1Vip(NeutronLbaasV1Mixin):
pass
@base.resource("neutron", "health_monitor", order=next(_neutron_order),
tenant_resource=True)
class NeutronV1Healthmonitor(NeutronLbaasV1Mixin):
pass
@base.resource("neutron", "pool", order=next(_neutron_order),
tenant_resource=True)
class NeutronV1Pool(NeutronLbaasV1Mixin):
pass
class NeutronLbaasV2Mixin(NeutronMixin):
def list(self):
if self.supports_extension("lbaasv2"):
return super(NeutronLbaasV2Mixin, self).list()
return []
@base.resource("neutron", "loadbalancer", order=next(_neutron_order),
tenant_resource=True)
class NeutronV2Loadbalancer(NeutronLbaasV2Mixin):
def is_deleted(self):
try:
self._manager().show_loadbalancer(self.id())
except Exception as e:
return getattr(e, "status_code", 400) == 404
return False
@base.resource("neutron", "bgpvpn", order=next(_neutron_order),
admin_required=True, perform_for_admin_only=True)
class NeutronBgpvpn(NeutronMixin):
def list(self):
if self.supports_extension("bgpvpn"):
return self._manager().list_bgpvpns()["bgpvpns"]
return []
@base.resource("neutron", "floatingip", order=next(_neutron_order),
tenant_resource=True)
class NeutronFloatingIP(NeutronMixin):
def name(self):
return self.raw_resource.get("description", "")
def list(self):
if CONF.openstack.pre_newton_neutron:
# NOTE(andreykurilin): Neutron API of pre-newton openstack
# releases does not support description field in Floating IPs.
# We do not want to remove non-Rally resources, so let's just do
# nothing here and move the pre-newton logic into separate plugins
return []
return super(NeutronFloatingIP, self).list()
@base.resource("neutron", "port", order=next(_neutron_order),
tenant_resource=True)
class NeutronPort(NeutronMixin):
# NOTE(andreykurilin): a port is the kind of resource that can be created
# automatically. In that case it doesn't have a name field that matches
# our resource name templates.
ROUTER_INTERFACE_OWNERS = ("network:router_interface",
"network:router_interface_distributed",
"network:ha_router_replicated_interface")
ROUTER_GATEWAY_OWNER = "network:router_gateway"
def __init__(self, *args, **kwargs):
super(NeutronPort, self).__init__(*args, **kwargs)
self._cache = {}
def _get_resources(self, resource):
if resource not in self._cache:
resources = getattr(self._manager(), "list_%s" % resource)()
self._cache[resource] = [r for r in resources[resource]
if r["tenant_id"] == self.tenant_uuid]
return self._cache[resource]
def list(self):
ports = self._get_resources("ports")
for port in ports:
if not port.get("name"):
parent_name = None
if (port["device_owner"] in self.ROUTER_INTERFACE_OWNERS or
port["device_owner"] == self.ROUTER_GATEWAY_OWNER):
# first case is a port created while adding an interface to
# the subnet
# second case is a port created while adding gateway for
# the network
port_router = [r for r in self._get_resources("routers")
if r["id"] == port["device_id"]]
if port_router:
parent_name = port_router[0]["name"]
if parent_name:
port["parent_name"] = parent_name
return ports
def name(self):
return self.raw_resource.get("parent_name",
self.raw_resource.get("name", ""))
def delete(self):
device_owner = self.raw_resource["device_owner"]
if (device_owner in self.ROUTER_INTERFACE_OWNERS or
device_owner == self.ROUTER_GATEWAY_OWNER):
if device_owner == self.ROUTER_GATEWAY_OWNER:
self._manager().remove_gateway_router(
self.raw_resource["device_id"])
self._manager().remove_interface_router(
self.raw_resource["device_id"], {"port_id": self.id()})
else:
try:
self._manager().delete_port(self.id())
except neutron_exceptions.PortNotFoundClient:
# Port can be already auto-deleted, skip silently
LOG.debug("Port %s was not deleted. Skip silently because "
"port can be already auto-deleted." % self.id())
@base.resource("neutron", "subnet", order=next(_neutron_order),
tenant_resource=True)
class NeutronSubnet(NeutronMixin):
pass
@base.resource("neutron", "network", order=next(_neutron_order),
tenant_resource=True)
class NeutronNetwork(NeutronMixin):
pass
@base.resource("neutron", "router", order=next(_neutron_order),
tenant_resource=True)
class NeutronRouter(NeutronMixin):
pass
@base.resource("neutron", "security_group", order=next(_neutron_order),
tenant_resource=True)
class NeutronSecurityGroup(NeutronMixin):
def list(self):
tenant_sgs = super(NeutronSecurityGroup, self).list()
# NOTE(pirsriva): Filter out "default" security group deletion
# by non-admin role user
return filter(lambda r: r["name"] != "default",
tenant_sgs)
@base.resource("neutron", "quota", order=next(_neutron_order),
admin_required=True, tenant_resource=True)
class NeutronQuota(QuotaMixin):
def delete(self):
self.admin.neutron().delete_quota(self.tenant_uuid)
# CINDER
_cinder_order = get_order(400)
@base.resource("cinder", "backups", order=next(_cinder_order),
tenant_resource=True)
class CinderVolumeBackup(base.ResourceManager):
pass
@base.resource("cinder", "volume_types", order=next(_cinder_order),
admin_required=True, perform_for_admin_only=True)
class CinderVolumeType(base.ResourceManager):
pass
@base.resource("cinder", "volume_snapshots", order=next(_cinder_order),
tenant_resource=True)
class CinderVolumeSnapshot(base.ResourceManager):
pass
@base.resource("cinder", "transfers", order=next(_cinder_order),
tenant_resource=True)
class CinderVolumeTransfer(base.ResourceManager):
pass
@base.resource("cinder", "volumes", order=next(_cinder_order),
tenant_resource=True)
class CinderVolume(base.ResourceManager):
pass
@base.resource("cinder", "image_volumes_cache", order=next(_cinder_order),
admin_required=True, perform_for_admin_only=True)
class CinderImageVolumeCache(base.ResourceManager):
def _glance(self):
return image.Image(self.admin)
def _manager(self):
return self.admin.cinder().volumes
def list(self):
images = dict(("image-%s" % i.id, i)
for i in self._glance().list_images())
return [{"volume": v, "image": images[v.name]}
for v in self._manager().list(search_opts={"all_tenants": 1})
if v.name in images]
def name(self):
return self.raw_resource["image"].name
def id(self):
return self.raw_resource["volume"].id
@base.resource("cinder", "quotas", order=next(_cinder_order),
admin_required=True, tenant_resource=True)
class CinderQuotas(QuotaMixin, base.ResourceManager):
pass
@base.resource("cinder", "qos_specs", order=next(_cinder_order),
admin_required=True, perform_for_admin_only=True)
class CinderQos(base.ResourceManager):
pass
# MANILA
_manila_order = get_order(450)
@base.resource("manila", "shares", order=next(_manila_order),
tenant_resource=True)
class ManilaShare(base.ResourceManager):
pass
@base.resource("manila", "share_networks", order=next(_manila_order),
tenant_resource=True)
class ManilaShareNetwork(base.ResourceManager):
pass
@base.resource("manila", "security_services", order=next(_manila_order),
tenant_resource=True)
class ManilaSecurityService(base.ResourceManager):
pass
# GLANCE
@base.resource("glance", "images", order=500, tenant_resource=True)
class GlanceImage(base.ResourceManager):
def _client(self):
return image.Image(self.admin or self.user)
def list(self):
images = (self._client().list_images(owner=self.tenant_uuid) +
self._client().list_images(status="deactivated",
owner=self.tenant_uuid))
return images
def delete(self):
client = self._client()
if self.raw_resource.status == "deactivated":
glancev2 = glance_v2.GlanceV2Service(self.admin or self.user)
glancev2.reactivate_image(self.raw_resource.id)
client.delete_image(self.raw_resource.id)
task_utils.wait_for_status(
self.raw_resource, ["deleted"],
check_deletion=True,
update_resource=self._client().get_image,
timeout=CONF.openstack.glance_image_delete_timeout,
check_interval=CONF.openstack.glance_image_delete_poll_interval)
# SAHARA
_sahara_order = get_order(600)
@base.resource("sahara", "job_executions", order=next(_sahara_order),
tenant_resource=True)
class SaharaJobExecution(SynchronizedDeletion, base.ResourceManager):
pass
@base.resource("sahara", "jobs", order=next(_sahara_order),
tenant_resource=True)
class SaharaJob(SynchronizedDeletion, base.ResourceManager):
pass
@base.resource("sahara", "job_binary_internals", order=next(_sahara_order),
tenant_resource=True)
class SaharaJobBinaryInternals(SynchronizedDeletion, base.ResourceManager):
pass
@base.resource("sahara", "job_binaries", order=next(_sahara_order),
tenant_resource=True)
class SaharaJobBinary(SynchronizedDeletion, base.ResourceManager):
pass
@base.resource("sahara", "data_sources", order=next(_sahara_order),
tenant_resource=True)
class SaharaDataSource(SynchronizedDeletion, base.ResourceManager):
pass
@base.resource("sahara", "clusters", order=next(_sahara_order),
tenant_resource=True)
class SaharaCluster(base.ResourceManager):
# Need special treatment for Sahara Cluster because of the way the
# exceptions are described in:
# https://github.com/openstack/python-saharaclient/blob/master/
# saharaclient/api/base.py#L145
def is_deleted(self):
try:
self._manager().get(self.id())
return False
except saharaclient_base.APIException as e:
return e.error_code == 404
@base.resource("sahara", "cluster_templates", order=next(_sahara_order),
tenant_resource=True)
class SaharaClusterTemplate(SynchronizedDeletion, base.ResourceManager):
pass
@base.resource("sahara", "node_group_templates", order=next(_sahara_order),
tenant_resource=True)
class SaharaNodeGroup(SynchronizedDeletion, base.ResourceManager):
pass
# CEILOMETER
@base.resource("ceilometer", "alarms", order=700, tenant_resource=True)
class CeilometerAlarms(SynchronizedDeletion, base.ResourceManager):
def id(self):
return self.raw_resource.alarm_id
def list(self):
query = [{
"field": "project_id",
"op": "eq",
"value": self.tenant_uuid
}]
return self._manager().list(q=query)
# ZAQAR
@base.resource("zaqar", "queues", order=800)
class ZaqarQueues(SynchronizedDeletion, base.ResourceManager):
def list(self):
return self.user.zaqar().queues()
# DESIGNATE
_designate_order = get_order(900)
class DesignateResource(SynchronizedDeletion, base.ResourceManager):
# TODO(boris-42): This should be handled somewhere else.
NAME_PREFIX = "s_rally_"
def _manager(self, resource=None):
# Map resource names to api / client version
resource = resource or self._resource
version = {
"domains": "1",
"servers": "1",
"records": "1",
"recordsets": "2",
"zones": "2"
}[resource]
client = self._admin_required and self.admin or self.user
return getattr(getattr(client, self._service)(version), resource)
def id(self):
"""Returns id of resource."""
return self.raw_resource["id"]
def name(self):
"""Returns name of resource."""
return self.raw_resource["name"]
def list(self):
return [item for item in self._manager().list()
if item["name"].startswith(self.NAME_PREFIX)]
@base.resource("designate", "domains", order=next(_designate_order),
tenant_resource=True, threads=1)
class DesignateDomain(DesignateResource):
pass
@base.resource("designate", "servers", order=next(_designate_order),
admin_required=True, perform_for_admin_only=True, threads=1)
class DesignateServer(DesignateResource):
pass
@base.resource("designate", "zones", order=next(_designate_order),
tenant_resource=True, threads=1)
class DesignateZones(DesignateResource):
def list(self):
marker = None
criterion = {"name": "%s*" % self.NAME_PREFIX}
while True:
items = self._manager().list(marker=marker, limit=100,
criterion=criterion)
if not items:
break
for item in items:
yield item
marker = items[-1]["id"]
# SWIFT
_swift_order = get_order(1000)
class SwiftMixin(SynchronizedDeletion, base.ResourceManager):
def _manager(self):
client = self._admin_required and self.admin or self.user
return getattr(client, self._service)()
def id(self):
return self.raw_resource
def name(self):
# NOTE(stpierre): raw_resource is a list of either [container
# name, object name] (as in SwiftObject) or just [container
# name] (as in SwiftContainer).
return self.raw_resource[-1]
def delete(self):
delete_method = getattr(self._manager(), "delete_%s" % self._resource)
# NOTE(weiwu): *self.raw_resource is required because to delete a
# container we pass only the container name, while to delete an object we
# pass the container name as the first argument and the object name as
# the second.
delete_method(*self.raw_resource)
@base.resource("swift", "object", order=next(_swift_order),
tenant_resource=True)
class SwiftObject(SwiftMixin):
def list(self):
object_list = []
containers = self._manager().get_account(full_listing=True)[1]
for con in containers:
objects = self._manager().get_container(con["name"],
full_listing=True)[1]
for obj in objects:
raw_resource = [con["name"], obj["name"]]
object_list.append(raw_resource)
return object_list
@base.resource("swift", "container", order=next(_swift_order),
tenant_resource=True)
class SwiftContainer(SwiftMixin):
def list(self):
containers = self._manager().get_account(full_listing=True)[1]
return [[con["name"]] for con in containers]
# MISTRAL
_mistral_order = get_order(1100)
@base.resource("mistral", "workbooks", order=next(_mistral_order),
tenant_resource=True)
class MistralWorkbooks(SynchronizedDeletion, base.ResourceManager):
def delete(self):
self._manager().delete(self.raw_resource.name)
@base.resource("mistral", "workflows", order=next(_mistral_order),
tenant_resource=True)
class MistralWorkflows(SynchronizedDeletion, base.ResourceManager):
pass
@base.resource("mistral", "executions", order=next(_mistral_order),
tenant_resource=True)
class MistralExecutions(SynchronizedDeletion, base.ResourceManager):
def name(self):
# NOTE(andreykurilin): a Mistral Execution doesn't have its own name
# which we could use for filtering, but it stores the workflow id and
# name, even after the workflow is deleted.
return self.raw_resource.workflow_name
# MURANO
_murano_order = get_order(1200)
@base.resource("murano", "environments", tenant_resource=True,
order=next(_murano_order))
class MuranoEnvironments(SynchronizedDeletion, base.ResourceManager):
pass
@base.resource("murano", "packages", tenant_resource=True,
order=next(_murano_order))
class MuranoPackages(base.ResourceManager):
def list(self):
return filter(lambda x: x.name != "Core library",
super(MuranoPackages, self).list())
# IRONIC
_ironic_order = get_order(1300)
@base.resource("ironic", "node", admin_required=True,
order=next(_ironic_order), perform_for_admin_only=True)
class IronicNodes(base.ResourceManager):
def id(self):
return self.raw_resource.uuid
# GNOCCHI
_gnocchi_order = get_order(1400)
@base.resource("gnocchi", "archive_policy_rule", order=next(_gnocchi_order),
admin_required=True, perform_for_admin_only=True)
class GnocchiArchivePolicyRule(base.ResourceManager):
def name(self):
return self.raw_resource["name"]
def id(self):
return self.raw_resource["name"]
# WATCHER
_watcher_order = get_order(1500)
class WatcherMixin(SynchronizedDeletion, base.ResourceManager):
def id(self):
return self.raw_resource.uuid
def list(self):
return self._manager().list(limit=0)
def is_deleted(self):
from watcherclient.common.apiclient import exceptions
try:
self._manager().get(self.id())
return False
except exceptions.NotFound:
return True
@base.resource("watcher", "audit_template", order=next(_watcher_order),
admin_required=True, perform_for_admin_only=True)
class WatcherTemplate(WatcherMixin):
pass
@base.resource("watcher", "action_plan", order=next(_watcher_order),
admin_required=True, perform_for_admin_only=True)
class WatcherActionPlan(WatcherMixin):
def name(self):
return base.NoName(self._resource)
@base.resource("watcher", "audit", order=next(_watcher_order),
admin_required=True, perform_for_admin_only=True)
class WatcherAudit(WatcherMixin):
def name(self):
return self.raw_resource.uuid
# KEYSTONE
_keystone_order = get_order(9000)
class KeystoneMixin(SynchronizedDeletion):
def _manager(self):
return identity.Identity(self.admin)
def delete(self):
delete_method = getattr(self._manager(), "delete_%s" % self._resource)
delete_method(self.id())
def list(self):
resources = self._resource + "s"
return getattr(self._manager(), "list_%s" % resources)()
@base.resource("keystone", "user", order=next(_keystone_order),
admin_required=True, perform_for_admin_only=True)
class KeystoneUser(KeystoneMixin, base.ResourceManager):
pass
@base.resource("keystone", "project", order=next(_keystone_order),
admin_required=True, perform_for_admin_only=True)
class KeystoneProject(KeystoneMixin, base.ResourceManager):
pass
@base.resource("keystone", "service", order=next(_keystone_order),
admin_required=True, perform_for_admin_only=True)
class KeystoneService(KeystoneMixin, base.ResourceManager):
pass
@base.resource("keystone", "role", order=next(_keystone_order),
admin_required=True, perform_for_admin_only=True)
class KeystoneRole(KeystoneMixin, base.ResourceManager):
pass
# NOTE(andreykurilin): unfortunately, ec2 credentials don't have name
# and id fields, which makes it impossible to identify resources
# belonging to a particular task.
@base.resource("keystone", "ec2", tenant_resource=True,
order=next(_keystone_order))
class KeystoneEc2(SynchronizedDeletion, base.ResourceManager):
def _manager(self):
return identity.Identity(self.user)
def id(self):
return "n/a"
def name(self):
return base.NoName(self._resource)
@property
def user_id(self):
return self.user.keystone.auth_ref.user_id
def list(self):
return self._manager().list_ec2credentials(self.user_id)
def delete(self):
self._manager().delete_ec2credential(
self.user_id, access=self.raw_resource.access)

View File

@ -1,268 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import random
from rally.common import validation
from rally import consts
from rally import exceptions
from rally.plugins.openstack import osclients
from rally.task import context
@validation.configure("check_api_versions")
class CheckOpenStackAPIVersionsValidator(validation.Validator):
"""Additional validation for api_versions context"""
def validate(self, context, config, plugin_cls, plugin_cfg):
for client in plugin_cfg:
client_cls = osclients.OSClient.get(client)
try:
if ("service_type" in plugin_cfg[client] or
"service_name" in plugin_cfg[client]):
client_cls.is_service_type_configurable()
if "version" in plugin_cfg[client]:
client_cls.validate_version(plugin_cfg[client]["version"])
except exceptions.RallyException as e:
return self.fail(
"Invalid settings for '%(client)s': %(error)s" % {
"client": client,
"error": e.format_message()})
@validation.add("check_api_versions")
@context.configure(name="api_versions", platform="openstack", order=150)
class OpenStackAPIVersions(context.Context):
"""Context for specifying OpenStack clients versions and service types.
Some OpenStack services support several API versions. To recognize
the endpoints of each version, separate service types are provided in
Keystone service catalog.
Rally has the map of default service names - service types. But since
service type is an entity, which can be configured manually by admin(
via keystone api) without relation to service name, such map can be
insufficient.
Also, Keystone service catalog does not provide a map types to name
(this statement is true for keystone < 3.3 ).
This context was designed for not-default service types and not-default
API versions usage.
An example of specifying API version:
.. code-block:: json
# In this example we will launch NovaKeypair.create_and_list_keypairs
# scenario on 2.2 api version.
{
"NovaKeypair.create_and_list_keypairs": [
{
"args": {
"key_type": "x509"
},
"runner": {
"type": "constant",
"times": 10,
"concurrency": 2
},
"context": {
"users": {
"tenants": 3,
"users_per_tenant": 2
},
"api_versions": {
"nova": {
"version": 2.2
}
}
}
}
]
}
An example of specifying API version along with service type:
.. code-block:: json
# In this example we will launch CinderVolumes.create_and_attach_volume
# scenario on Cinder V2
{
"CinderVolumes.create_and_attach_volume": [
{
"args": {
"size": 10,
"image": {
"name": "^cirros.*-disk$"
},
"flavor": {
"name": "m1.tiny"
},
"create_volume_params": {
"availability_zone": "nova"
}
},
"runner": {
"type": "constant",
"times": 5,
"concurrency": 1
},
"context": {
"users": {
"tenants": 2,
"users_per_tenant": 2
},
"api_versions": {
"cinder": {
"version": 2,
"service_type": "volumev2"
}
}
}
}
]
}
Also, it is possible to use the service name as an identifier of a
service endpoint, but an admin user is required (Keystone can return a
map of service names to types, but such an API is permitted only for
admins). An example:
.. code-block:: json
# Similar to the previous example, but `service_name` argument is used
# instead of `service_type`
{
"CinderVolumes.create_and_attach_volume": [
{
"args": {
"size": 10,
"image": {
"name": "^cirros.*-disk$"
},
"flavor": {
"name": "m1.tiny"
},
"create_volume_params": {
"availability_zone": "nova"
}
},
"runner": {
"type": "constant",
"times": 5,
"concurrency": 1
},
"context": {
"users": {
"tenants": 2,
"users_per_tenant": 2
},
"api_versions": {
"cinder": {
"version": 2,
"service_name": "cinderv2"
}
}
}
}
]
}
"""
VERSION_SCHEMA = {
"anyOf": [
{"type": "string", "description": "a string-like version."},
{"type": "number", "description": "a number-like version."}
]
}
CONFIG_SCHEMA = {
"type": "object",
"$schema": consts.JSON_SCHEMA,
"patternProperties": {
"^[a-z]+$": {
"type": "object",
"oneOf": [
{
"description": "version only",
"properties": {
"version": VERSION_SCHEMA,
},
"required": ["version"],
"additionalProperties": False
},
{
"description": "version and service_name",
"properties": {
"version": VERSION_SCHEMA,
"service_name": {"type": "string"}
},
"required": ["service_name"],
"additionalProperties": False
},
{
"description": "version and service_type",
"properties": {
"version": VERSION_SCHEMA,
"service_type": {"type": "string"}
},
"required": ["service_type"],
"additionalProperties": False
}
],
}
},
"minProperties": 1,
"additionalProperties": False
}
def setup(self):
# FIXME(andreykurilin): move all checks to validate method.
# use admin only when `service_name` is present
admin_clients = osclients.Clients(
self.context.get("admin", {}).get("credential"))
clients = osclients.Clients(random.choice(
self.context["users"])["credential"])
services = clients.keystone.service_catalog.get_endpoints()
services_from_admin = None
for client_name, conf in self.config.items():
if "service_type" in conf and conf["service_type"] not in services:
raise exceptions.ValidationError(
"There is no service with '%s' type in your environment."
% conf["service_type"])
elif "service_name" in conf:
if not self.context.get("admin", {}).get("credential"):
raise exceptions.ContextSetupFailure(
ctx_name=self.get_name(),
msg="Setting 'service_name' is admin only operation.")
if not services_from_admin:
services_from_admin = dict(
[(s.name, s.type)
for s in admin_clients.keystone().services.list()])
if conf["service_name"] not in services_from_admin:
raise exceptions.ValidationError(
"There is no '%s' service in your environment"
% conf["service_name"])
# TODO(boris-42): Use separate key ["openstack"]["versions"]
self.context["config"]["api_versions@openstack"][client_name][
"service_type"] = services_from_admin[conf["service_name"]]
# NOTE(boris-42): Required to be backward compatible
self.context["config"]["api_versions"] = (
self.context["config"]["api_versions@openstack"])
def cleanup(self):
# nothing to do here
pass
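
Downstream code reads the processed versions back out of the context and
hands them to the client factory; a hedged sketch based on the usages
visible elsewhere in this commit:

api_info = context["config"].get("api_versions")
clients = osclients.Clients(user["credential"], api_info=api_info)
nova = clients.nova()  # built against the requested version/service type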

View File

@ -1,179 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
from six import moves
from rally.common import logging
from rally.common import utils as rutils
from rally.common import validation
from rally import consts
from rally import exceptions
from rally.plugins.openstack.scenarios.ceilometer import utils as ceilo_utils
from rally.task import context
LOG = logging.getLogger(__name__)
@validation.add("required_platform", platform="openstack", users=True)
@context.configure(name="ceilometer", platform="openstack", order=450)
class CeilometerSampleGenerator(context.Context):
"""Creates ceilometer samples and resources."""
CONFIG_SCHEMA = {
"type": "object",
"$schema": consts.JSON_SCHEMA,
"properties": {
"counter_name": {
"type": "string"
},
"counter_type": {
"type": "string"
},
"counter_unit": {
"type": "string"
},
"counter_volume": {
"type": "number",
"minimum": 0
},
"resources_per_tenant": {
"type": "integer",
"minimum": 1
},
"samples_per_resource": {
"type": "integer",
"minimum": 1
},
"timestamp_interval": {
"type": "integer",
"minimum": 1
},
"metadata_list": {
"type": "array",
"items": {
"type": "object",
"properties": {
"status": {
"type": "string"
},
"name": {
"type": "string"
},
"deleted": {
"type": "string"
},
"created_at": {
"type": "string"
}
},
"additionalProperties": False
}
},
"batch_size": {
"type": "integer",
"minimum": 1
},
"batches_allow_lose": {
"type": "integer",
"minimum": 0
}
},
"required": ["counter_name", "counter_type", "counter_unit",
"counter_volume"],
"additionalProperties": False
}
DEFAULT_CONFIG = {
"resources_per_tenant": 5,
"samples_per_resource": 5,
"timestamp_interval": 60
}
    def _store_batch_samples(self, scenario, batches, batches_allow_lose):
        batches_allow_lose = batches_allow_lose or 0
        unsuccess = 0
        # initialized so the return below cannot hit an unbound name if
        # every batch fails within the allowed number of losses
        samples = []
        for i, batch in enumerate(batches, start=1):
            try:
                samples = scenario._create_samples(batch)
            except Exception:
                unsuccess += 1
                LOG.warning("Failed to store batch %d of Ceilometer samples"
                            " during context creation" % i)
                if unsuccess > batches_allow_lose:
                    raise exceptions.ContextSetupFailure(
                        ctx_name=self.get_name(),
                        msg="Context failed to store too many batches of samples")
        return samples
def setup(self):
new_sample = {
"counter_name": self.config["counter_name"],
"counter_type": self.config["counter_type"],
"counter_unit": self.config["counter_unit"],
"counter_volume": self.config["counter_volume"],
}
resources = []
for user, tenant_id in rutils.iterate_per_tenants(
self.context["users"]):
self.context["tenants"][tenant_id]["samples"] = []
self.context["tenants"][tenant_id]["resources"] = []
scenario = ceilo_utils.CeilometerScenario(
context={"user": user, "task": self.context["task"]}
)
for i in moves.xrange(self.config["resources_per_tenant"]):
samples_to_create = scenario._make_samples(
count=self.config["samples_per_resource"],
interval=self.config["timestamp_interval"],
metadata_list=self.config.get("metadata_list"),
batch_size=self.config.get("batch_size"),
**new_sample)
samples = self._store_batch_samples(
scenario, samples_to_create,
self.config.get("batches_allow_lose")
)
for sample in samples:
self.context["tenants"][tenant_id]["samples"].append(
sample.to_dict())
self.context["tenants"][tenant_id]["resources"].append(
samples[0].resource_id)
resources.append((user, samples[0].resource_id))
# NOTE(boris-42): Context should wait until samples are processed
from ceilometerclient import exc
for user, resource_id in resources:
scenario = ceilo_utils.CeilometerScenario(
context={"user": user, "task": self.context["task"]})
success = False
for i in range(60):
try:
scenario._get_resource(resource_id)
success = True
break
except exc.HTTPNotFound:
time.sleep(3)
if not success:
raise exceptions.ContextSetupFailure(
ctx_name=self.get_name(),
msg="Ceilometer Resource %s is not found" % resource_id)
def cleanup(self):
# We don't have API for removal of samples and resources
pass
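
A hypothetical task snippet enabling this context (the values are
illustrative; the required keys follow CONFIG_SCHEMA above):

"context": {
    "ceilometer": {
        "counter_name": "cpu_util",
        "counter_type": "gauge",
        "counter_unit": "%",
        "counter_volume": 100,
        "resources_per_tenant": 3,
        "samples_per_resource": 10
    }
}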

View File

@ -1,61 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import logging
from rally.common import utils
from rally.common import validation
from rally import consts
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack import osclients
from rally.plugins.openstack.services.storage import block
from rally.task import context
LOG = logging.getLogger(__name__)
@validation.add("required_platform", platform="openstack", admin=True)
@context.configure(name="volume_types", platform="openstack", order=410)
class VolumeTypeGenerator(context.Context):
"""Adds cinder volumes types."""
CONFIG_SCHEMA = {
"type": "array",
"$schema": consts.JSON_SCHEMA,
"items": {"type": "string"}
}
def setup(self):
admin_clients = osclients.Clients(
self.context.get("admin", {}).get("credential"),
api_info=self.context["config"].get("api_versions"))
cinder_service = block.BlockStorage(
admin_clients,
name_generator=self.generate_random_name,
atomic_inst=self.atomic_actions())
self.context["volume_types"] = []
for vtype_name in self.config:
LOG.debug("Creating Cinder volume type %s" % vtype_name)
vtype = cinder_service.create_volume_type(vtype_name)
self.context["volume_types"].append({"id": vtype.id,
"name": vtype_name})
def cleanup(self):
        matcher = utils.make_name_matcher(*self.config)
        resource_manager.cleanup(
            names=["cinder.volume_types"],
            admin=self.context["admin"],
            api_versions=self.context["config"].get("api_versions"),
            superclass=matcher,
task_id=self.get_owner_id())
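
Since CONFIG_SCHEMA above is a plain array of strings, enabling this
context is just a list of volume type names; a hypothetical snippet:

"context": {
    "volume_types": ["test_vtype_1", "test_vtype_2"]
}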

View File

@ -1,83 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import utils as rutils
from rally import consts
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack import osclients
from rally.plugins.openstack.services.storage import block
from rally.task import context
@context.configure(name="volumes", platform="openstack", order=420)
class VolumeGenerator(context.Context):
"""Creates volumes for each tenant."""
CONFIG_SCHEMA = {
"type": "object",
"$schema": consts.JSON_SCHEMA,
"properties": {
"size": {
"type": "integer",
"minimum": 1
},
"type": {
"oneOf": [{"type": "string",
"description": "a string-like type of volume to "
"create."},
{"type": "null",
"description": "Use default type for volume to "
"create."}]
},
"volumes_per_tenant": {
"type": "integer",
"minimum": 1
}
},
"required": ["size"],
"additionalProperties": False
}
DEFAULT_CONFIG = {
"volumes_per_tenant": 1
}
def setup(self):
size = self.config["size"]
volume_type = self.config.get("type", None)
volumes_per_tenant = self.config["volumes_per_tenant"]
for user, tenant_id in rutils.iterate_per_tenants(
self.context["users"]):
self.context["tenants"][tenant_id].setdefault("volumes", [])
clients = osclients.Clients(
user["credential"],
api_info=self.context["config"].get("api_versions"))
cinder_service = block.BlockStorage(
clients,
name_generator=self.generate_random_name,
atomic_inst=self.atomic_actions())
for i in range(volumes_per_tenant):
vol = cinder_service.create_volume(size,
volume_type=volume_type)
self.context["tenants"][tenant_id]["volumes"].append(
vol._as_dict())
def cleanup(self):
resource_manager.cleanup(
names=["cinder.volumes"],
users=self.context.get("users", []),
api_versions=self.context["config"].get("api_versions"),
superclass=self.__class__,
task_id=self.get_owner_id())
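
A hypothetical task snippet for this context; only "size" is required,
the rest fall back to the defaults above:

"context": {
    "volumes": {
        "size": 1,
        "volumes_per_tenant": 2
    }
}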

View File

@ -1,40 +0,0 @@
# Copyright 2014: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
from rally.common import validation
from rally.plugins.openstack.cleanup import manager
from rally.plugins.openstack.context.cleanup import base
from rally.plugins.openstack import scenario
from rally.task import context
@validation.add(name="check_cleanup_resources", admin_required=True)
# NOTE(amaretskiy): Set order to run this just before UserCleanup
@context.configure(name="admin_cleanup", platform="openstack",
order=(sys.maxsize - 1), hidden=True)
class AdminCleanup(base.CleanupMixin, context.Context):
"""Context class for admin resources cleanup."""
def cleanup(self):
manager.cleanup(
names=self.config,
admin_required=True,
admin=self.context["admin"],
users=self.context.get("users", []),
api_versions=self.context["config"].get("api_versions"),
superclass=scenario.OpenStackScenario,
task_id=self.get_owner_id())
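
The config for this context is the list of resource manager names to clean
(validated by check_cleanup_resources against the names the managers
register); a hypothetical snippet using the <service> and
<service>.<resource> convention:

"context": {
    "admin_cleanup": ["keystone", "cinder.volume_types"]
}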

View File

@ -1,53 +0,0 @@
# Copyright 2014: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.common import validation
from rally import consts
from rally.plugins.openstack.cleanup import manager
@validation.configure("check_cleanup_resources")
class CheckCleanupResourcesValidator(validation.Validator):
def __init__(self, admin_required):
"""Validates that openstack resource managers exist
:param admin_required: describes access level to resource
"""
super(CheckCleanupResourcesValidator, self).__init__()
self.admin_required = admin_required
def validate(self, context, config, plugin_cls, plugin_cfg):
missing = set(plugin_cfg)
missing -= manager.list_resource_names(
admin_required=self.admin_required)
missing = ", ".join(missing)
if missing:
return self.fail(
"Couldn't find cleanup resource managers: %s" % missing)
class CleanupMixin(object):
CONFIG_SCHEMA = {
"type": "array",
"$schema": consts.JSON_SCHEMA,
"items": {
"type": "string",
}
}
def setup(self):
pass

View File

@ -1,40 +0,0 @@
# Copyright 2014: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

from rally.common import validation
from rally.plugins.openstack.cleanup import manager
from rally.plugins.openstack.context.cleanup import base
from rally.plugins.openstack import scenario
from rally.task import context


@validation.add(name="check_cleanup_resources", admin_required=False)
# NOTE(amaretskiy): Set maximum order to run this last
@context.configure(name="cleanup", platform="openstack", order=sys.maxsize,
                   hidden=True)
class UserCleanup(base.CleanupMixin, context.Context):
    """Context class for user resources cleanup."""

    def cleanup(self):
        manager.cleanup(
            names=self.config,
            admin_required=False,
            users=self.context.get("users", []),
            api_versions=self.context["config"].get("api_versions"),
            superclass=scenario.OpenStackScenario,
            task_id=self.get_owner_id()
        )

View File

@@ -1,154 +0,0 @@
# Copyright 2016: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import pkgutil

from rally.common import utils as rutils
from rally.common import validation
from rally import consts
from rally import exceptions
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack import osclients
from rally.plugins.openstack.scenarios.heat import utils as heat_utils
from rally.task import context


def get_data(filename_or_resource):
    if isinstance(filename_or_resource, list):
        return pkgutil.get_data(*filename_or_resource)
    return open(filename_or_resource).read()


@validation.add("required_platform", platform="openstack", users=True)
@context.configure(name="heat_dataplane", platform="openstack", order=435)
class HeatDataplane(context.Context):
    """Context class for creating stacks from a given template.

    This context creates stacks from the given template for each tenant
    and adds their details to the context. The following details are
    added:

        * id of the stack;
        * template file contents;
        * files dictionary;
        * stack parameters;

    The Heat template should define a "gate" node which interacts with
    Rally over ssh and with the workload nodes over any protocol. To make
    this possible, the template should accept the following parameters:

        * network_id: id of the public network
        * router_id: id of the external router to connect the "gate" node
        * key_name: name of the nova ssh keypair to use on the "gate" node
    """

    FILE_SCHEMA = {
        "description": "",
        "type": "string",
    }
    RESOURCE_SCHEMA = {
        "description": "",
        "type": "array",
        "minItems": 2,
        "maxItems": 2,
        "items": {"type": "string"}
    }
    CONFIG_SCHEMA = {
        "type": "object",
        "$schema": consts.JSON_SCHEMA,
        "properties": {
            "stacks_per_tenant": {
                "type": "integer",
                "minimum": 1
            },
            "template": {
                "oneOf": [FILE_SCHEMA, RESOURCE_SCHEMA],
            },
            "files": {
                "type": "object",
                "additionalProperties": True
            },
            "parameters": {
                "type": "object",
                "additionalProperties": True
            },
            "context_parameters": {
                "type": "object",
                "additionalProperties": True
            },
        },
        "additionalProperties": False
    }

    DEFAULT_CONFIG = {
        "stacks_per_tenant": 1,
    }

    def _get_context_parameter(self, user, tenant_id, path):
        value = {"user": user, "tenant": self.context["tenants"][tenant_id]}
        for key in path.split("."):
            try:
                # try to cast string to int in order to support integer keys
                # e.g. 'spam.1.eggs' will be translated to ["spam"][1]["eggs"]
                key = int(key)
            except ValueError:
                pass
            try:
                value = value[key]
            except KeyError:
                raise exceptions.RallyException(
                    "There is no key %s in context" % path)
        return value

    def _get_public_network_id(self):
        nc = osclients.Clients(self.context["admin"]["credential"]).neutron()
        networks = nc.list_networks(**{"router:external": True})["networks"]
        return networks[0]["id"]

    def setup(self):
        template = get_data(self.config["template"])
        files = {}
        for key, filename in self.config.get("files", {}).items():
            files[key] = get_data(filename)
        parameters = self.config.get("parameters", rutils.LockedDict())
        with parameters.unlocked():
            if "network_id" not in parameters:
                parameters["network_id"] = self._get_public_network_id()
            for user, tenant_id in rutils.iterate_per_tenants(
                    self.context["users"]):
                for name, path in self.config.get("context_parameters",
                                                  {}).items():
                    parameters[name] = self._get_context_parameter(user,
                                                                   tenant_id,
                                                                   path)
                if "router_id" not in parameters:
                    networks = self.context["tenants"][tenant_id]["networks"]
                    parameters["router_id"] = networks[0]["router_id"]
                if "key_name" not in parameters:
                    parameters["key_name"] = user["keypair"]["name"]
                heat_scenario = heat_utils.HeatScenario(
                    {"user": user, "task": self.context["task"],
                     "owner_id": self.context["owner_id"]})
                self.context["tenants"][tenant_id]["stack_dataplane"] = []
                for i in range(self.config["stacks_per_tenant"]):
                    stack = heat_scenario._create_stack(template, files=files,
                                                        parameters=parameters)
                    tenant_data = self.context["tenants"][tenant_id]
                    tenant_data["stack_dataplane"].append([stack.id, template,
                                                           files, parameters])

    def cleanup(self):
        resource_manager.cleanup(names=["heat.stacks"],
                                 users=self.context.get("users", []),
                                 superclass=heat_utils.HeatScenario,
                                 task_id=self.get_owner_id())
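
A hedged sketch of how this context might be configured in a task file, written as a Python dict; the template path and parameter names are placeholders, not values from the diff. Note the dotted-path convention that `_get_context_parameter` resolves, where numeric segments index into lists:

# --- illustrative sketch, not part of the original file ---
heat_dataplane_config = {
    "heat_dataplane": {
        "stacks_per_tenant": 1,
        # either a filename or a [package, resource] pair for pkgutil
        "template": "/path/to/gate_template.yaml",
        "parameters": {
            "flavor": "m1.tiny",
        },
        # resolved against {"user": ..., "tenant": ...}; for example
        # "tenant.networks.0.id" -> value["tenant"]["networks"][0]["id"]
        "context_parameters": {
            "private_net_id": "tenant.networks.0.id",
        },
    }
}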

View File

@@ -1,60 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from rally.common import utils as rutils
from rally.common import validation
from rally import consts
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack.scenarios.designate import utils
from rally.task import context


@validation.add("required_platform", platform="openstack", users=True)
@context.configure(name="zones", platform="openstack", order=600)
class ZoneGenerator(context.Context):
    """Context to add `zones_per_tenant` zones for each tenant."""

    CONFIG_SCHEMA = {
        "type": "object",
        "$schema": consts.JSON_SCHEMA,
        "properties": {
            "zones_per_tenant": {
                "type": "integer",
                "minimum": 1
            },
        },
        "additionalProperties": False
    }

    DEFAULT_CONFIG = {
        "zones_per_tenant": 1
    }

    def setup(self):
        for user, tenant_id in rutils.iterate_per_tenants(
                self.context["users"]):
            self.context["tenants"][tenant_id].setdefault("zones", [])
            designate_util = utils.DesignateScenario(
                {"user": user,
                 "task": self.context["task"],
                 "owner_id": self.context["owner_id"]})
            for i in range(self.config["zones_per_tenant"]):
                zone = designate_util._create_zone()
                self.context["tenants"][tenant_id]["zones"].append(zone)

    def cleanup(self):
        resource_manager.cleanup(names=["designate.zones"],
                                 users=self.context.get("users", []),
                                 superclass=utils.DesignateScenario,
                                 task_id=self.get_owner_id())
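
Since `CONFIG_SCHEMA` here is small, it is easy to check a config against it directly. A hedged sketch using the `jsonschema` library; the config values are illustrative:

# --- illustrative sketch, not part of the original file ---
import jsonschema

schema = {
    "type": "object",
    "properties": {"zones_per_tenant": {"type": "integer", "minimum": 1}},
    "additionalProperties": False,
}
jsonschema.validate({"zones_per_tenant": 2}, schema)  # passes
# jsonschema.validate({"zones_per_tenant": 0}, schema) would raise
# ValidationError, since the schema requires a minimum of 1.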

View File

@@ -1,96 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from rally.common import logging
from rally.common import utils as rutils
from rally import consts
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack.scenarios.ec2 import utils as ec2_utils
from rally.plugins.openstack import types
from rally.task import context


LOG = logging.getLogger(__name__)


@context.configure(name="ec2_servers", platform="openstack", order=460)
class EC2ServerGenerator(context.Context):
    """Creates the specified number of nova servers per tenant via EC2 API."""

    CONFIG_SCHEMA = {
        "type": "object",
        "$schema": consts.JSON_SCHEMA,
        "properties": {
            "image": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string"
                    }
                },
                "additionalProperties": False
            },
            "flavor": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string"
                    }
                },
                "additionalProperties": False
            },
            "servers_per_tenant": {
                "type": "integer",
                "minimum": 1
            }
        },
        "required": ["image", "flavor", "servers_per_tenant"],
        "additionalProperties": False
    }

    def setup(self):
        image = self.config["image"]
        flavor = self.config["flavor"]
        image_id = types.EC2Image(self.context).pre_process(
            resource_spec=image, config={})
        for user, tenant_id in rutils.iterate_per_tenants(
                self.context["users"]):
            LOG.debug("Booting servers for tenant %s" % user["tenant_id"])
            ec2_scenario = ec2_utils.EC2Scenario({
                "user": user,
                "task": self.context["task"],
                "owner_id": self.context["owner_id"]})
            LOG.debug(
                "Calling _boot_servers with "
                "image_id=%(image_id)s flavor_name=%(flavor_name)s "
                "servers_per_tenant=%(servers_per_tenant)s"
                % {"image_id": image_id,
                   "flavor_name": flavor["name"],
                   "servers_per_tenant": self.config["servers_per_tenant"]})
            servers = ec2_scenario._boot_servers(
                image_id, flavor["name"], self.config["servers_per_tenant"])
            current_servers = [server.id for server in servers]
            self.context["tenants"][tenant_id]["ec2_servers"] = current_servers

    def cleanup(self):
        resource_manager.cleanup(names=["ec2.servers"],
                                 users=self.context.get("users", []),
                                 superclass=ec2_utils.EC2Scenario,
                                 task_id=self.get_owner_id())
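
For reference, a config that satisfies the schema's `required` list; the image and flavor names below are placeholders:

# --- illustrative sketch, not part of the original file ---
ec2_servers_config = {
    "image": {"name": "cirros-0.3.5-x86_64-disk"},  # resolved to an id
    "flavor": {"name": "m1.tiny"},                  # passed through by name
    "servers_per_tenant": 2,
}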

View File

@@ -1,210 +0,0 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from rally.common import cfg
from rally.common import logging
from rally.common import utils as rutils
from rally.common import validation
from rally import consts
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack import osclients
from rally.plugins.openstack.services.image import image
from rally.task import context


CONF = cfg.CONF
LOG = logging.getLogger(__name__)


@validation.add("required_platform", platform="openstack", users=True)
@context.configure(name="images", platform="openstack", order=410)
class ImageGenerator(context.Context):
    """Uploads specified Glance images to every tenant."""

    CONFIG_SCHEMA = {
        "type": "object",
        "$schema": consts.JSON_SCHEMA,
        "properties": {
            "image_url": {
                "type": "string",
                "description": "Location of the source to create image from."
            },
            "disk_format": {
                "description": "The format of the disk.",
                "enum": ["qcow2", "raw", "vhd", "vmdk", "vdi", "iso", "aki",
                         "ari", "ami"]
            },
            "container_format": {
                "description": "Format of the image container.",
                "enum": ["aki", "ami", "ari", "bare", "docker", "ova", "ovf"]
            },
            "image_name": {
                "type": "string",
                "description": "The name of the image to create. NOTE: it is "
                               "ignored when `images_per_tenant` is greater "
                               "than 1."
            },
            "min_ram": {
                "description": "Amount of RAM in MB",
                "type": "integer",
                "minimum": 0
            },
            "min_disk": {
                "description": "Amount of disk space in GB",
                "type": "integer",
                "minimum": 0
            },
            "visibility": {
                "description": "Visibility for this image ('shared' and "
                               "'community' are available only in case of "
                               "Glance V2).",
                "enum": ["public", "private", "shared", "community"]
            },
            "images_per_tenant": {
                "description": "The number of images to create per single "
                               "tenant.",
                "type": "integer",
                "minimum": 1
            },
            "image_args": {
                "description": "This param is deprecated since Rally 0.10.0; "
                               "specify the exact arguments in the root "
                               "section of the context instead.",
                "type": "object",
                "additionalProperties": True
            },
            "image_container": {
                "description": "This param is deprecated since Rally 0.10.0; "
                               "use `container_format` instead.",
                "type": "string",
            },
            "image_type": {
                "description": "This param is deprecated since Rally 0.10.0; "
                               "use `disk_format` instead.",
                "enum": ["qcow2", "raw", "vhd", "vmdk", "vdi", "iso", "aki",
                         "ari", "ami"],
            },
        },
        "oneOf": [{"description": "The format used since Rally 0.10.0",
                   "required": ["image_url", "disk_format",
                                "container_format"]},
                  {"description": "One of the backward-compatible formats",
                   "required": ["image_url", "image_type",
                                "container_format"]},
                  {"description": "One of the backward-compatible formats",
                   "required": ["image_url", "disk_format",
                                "image_container"]},
                  {"description": "One of the backward-compatible formats",
                   "required": ["image_url", "image_type",
                                "image_container"]}],
        "additionalProperties": False
    }

    DEFAULT_CONFIG = {"images_per_tenant": 1}

    def setup(self):
        image_url = self.config.get("image_url")
        disk_format = self.config.get("disk_format")
        container_format = self.config.get("container_format")
        images_per_tenant = self.config.get("images_per_tenant")
        visibility = self.config.get("visibility", "private")
        min_disk = self.config.get("min_disk", 0)
        min_ram = self.config.get("min_ram", 0)
        image_args = self.config.get("image_args", {})

        if "image_type" in self.config:
            LOG.warning("The 'image_type' argument is deprecated since "
                        "Rally 0.10.0, use disk_format argument instead")
            if not disk_format:
                disk_format = self.config["image_type"]

        if "image_container" in self.config:
            LOG.warning("The 'image_container' argument is deprecated since "
                        "Rally 0.10.0; use container_format argument instead")
            if not container_format:
                container_format = self.config["image_container"]

        if image_args:
            LOG.warning(
                "The 'image_args' argument is deprecated since Rally 0.10.0; "
                "specify arguments in a root section of context instead")

            if "is_public" in image_args:
                if "visibility" not in self.config:
                    visibility = ("public" if image_args["is_public"]
                                  else "private")
            if "min_ram" in image_args:
                if "min_ram" not in self.config:
                    min_ram = image_args["min_ram"]

            if "min_disk" in image_args:
                if "min_disk" not in self.config:
                    min_disk = image_args["min_disk"]

        # None image_name means that image.Image will generate a random name
        image_name = None
        if "image_name" in self.config and images_per_tenant == 1:
            image_name = self.config["image_name"]

        for user, tenant_id in rutils.iterate_per_tenants(
                self.context["users"]):
            current_images = []
            clients = osclients.Clients(
                user["credential"],
                api_info=self.context["config"].get("api_versions"))
            image_service = image.Image(
                clients, name_generator=self.generate_random_name)

            for i in range(images_per_tenant):
                image_obj = image_service.create_image(
                    image_name=image_name,
                    container_format=container_format,
                    image_location=image_url,
                    disk_format=disk_format,
                    visibility=visibility,
                    min_disk=min_disk,
                    min_ram=min_ram)
                current_images.append(image_obj.id)

            self.context["tenants"][tenant_id]["images"] = current_images

    def cleanup(self):
        if self.context.get("admin", {}):
            # NOTE(andreykurilin): Glance does not require the admin for
            #   listing tenant images, but the admin is required for
            #   discovering Cinder volumes which might be created for the
            #   purpose of caching. Removing such volumes is an optional
            #   step, since Cinder should have its own mechanism like a
            #   garbage collector, but if we can, let's remove everything
            #   and make the cloud as close as possible to the original
            #   state.
            admin = self.context["admin"]
            admin_required = None
        else:
            admin = None
            admin_required = False

        if "image_name" in self.config:
            matcher = rutils.make_name_matcher(self.config["image_name"])
        else:
            matcher = self.__class__

        resource_manager.cleanup(names=["glance.images",
                                        "cinder.image_volumes_cache"],
                                 admin=admin,
                                 admin_required=admin_required,
                                 users=self.context.get("users", []),
                                 api_versions=self.context["config"].get(
                                     "api_versions"),
                                 superclass=matcher,
                                 task_id=self.get_owner_id())
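
The `oneOf` block above admits four key combinations: the post-0.10.0 form plus three backward-compatible mixes of the deprecated keys. A hedged sketch of two of them; the URL is a placeholder:

# --- illustrative sketch, not part of the original file ---
# Current form (since Rally 0.10.0): disk_format + container_format.
images_current = {
    "image_url": "http://example.com/cirros.img",
    "disk_format": "qcow2",
    "container_format": "bare",
    "images_per_tenant": 2,
}

# Backward-compatible form: image_type/image_container are remapped to
# disk_format/container_format in setup(), each with a deprecation warning.
images_deprecated = {
    "image_url": "http://example.com/cirros.img",
    "image_type": "qcow2",
    "image_container": "bare",
}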

Some files were not shown because too many files have changed in this diff.