Added a 'deployment_tags' field for tests and test_sets, along with a
migration to support these changes. Added a fabfile task that creates a
new alembic migration. Additional refactoring and minor fixes.
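
A minimal sketch of that task, mirroring the createmigration task in
the fabfile.py diff below (the fabric import is an assumption here;
local() is used throughout the fabfile):

from fabric.api import local

def createmigration(comment):
    '''
    Supply a comment for the new alembic revision as the value
    for the comment argument
    '''
    # alembic config shipped with the adapter (path taken from the diff below)
    config_path = 'fuel_plugin/ostf_adapter/storage/alembic.ini'
    local(
        'alembic --config {0} revision --autogenerate -m "{1}"'
        .format(config_path, comment)
    )

It would typically be invoked as: fab createmigration:"add deployment_tags columns".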

Added a cluster_id field for test_sets (a migration script is
supplied). Moved the logic for discovering test_sets and tests into the
controllers (test-set controller). Refactored the nose plugin. Tests for
the new features have not yet been supplied.
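
Discovery now takes a deployment_info dict describing the cluster. A
minimal caller sketch, assuming the dict shape assembled in the
wsgi_utils diff below (the concrete values here are hypothetical):

from fuel_plugin.ostf_adapter.nose_plugin import nose_discovery

deployment_info = {
    'cluster_id': 1,                          # hypothetical cluster id
    'deployment_tags': set(['ha', 'ubuntu'])  # tags of the deployment
}
# test sets whose deployment_tags are not a subset of the cluster's
# tags are skipped during discovery
nose_discovery.discovery(
    path='fuel_plugin/tests/functional/dummy_tests',
    deployment_info=deployment_info
)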

Moved test discovery into the controllers; changed the model definitions and the discovery logic.

Added a new pure init revision for alembic (the others have been deleted). Some minor refactoring in other files.

Refactored the whole system to support the new logic for discovering test_sets and tests at OSTF runtime.

Minor fix in fabfile. Added logic for the unit tests that controls what the tests write to the db. Supplied a test for nose_discovery.

Some minor fixes to the nose_discovery plugin. Finished the unit tests for nose_discovery.

Added deployment_types_tests to exercise the new discovery behaviour while keeping the logic defined in dummy_tests. Fixed the unit tests for nose discovery.

Added new fake empty tests just to keep the logic defined in dummy_tests; a representative fixture is sketched below.
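
The fixtures rely on the docstring convention that discovery parses:
the first line is the test title, and the optional 'Duration:' and
'Deployment tags:' markers are stripped from the description. One
fixture taken from the deployment_types_tests diff below:

import unittest

class HATest(unittest.TestCase):
    def test_ha_rhel_depl(self):
        """fake empty test

        This is a fake test for ha
        rhel deployment
        Duration: 0sec
        Deployment tags: ha, rhel
        """
        self.assertTrue(True)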

Fixed the unit tests for wsgi_controllers.

Finished the unit tests for the wsgi controllers.

Some fixes to the wsgi controllers (the test_set controller now returns
all results from the query rather than only the first; the test
controller returns all tests that have no test_run id). Fixed the nose
discovery function: it now searches for test_sets for the given cluster
id without checking data in the db (in this case the sqlalchemy merge
should prevent duplicates). Fixed some methods in nose_storage_plugin
to support the new behaviour of get_description (which now additionally
returns deployment_tags). Attempted to rewrite the functional tests, so
TestingAdapterClient was modified. Some other minor fixes.
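
get_description now returns a four-element tuple (title, description,
duration, deployment_tags) instead of three. The parsing is delegated
to the _process_docstring helper shown in the nose_utils diff below; a
small self-contained sketch of how it carves markers out of a docstring:

from fuel_plugin.ostf_adapter.nose_plugin.nose_utils import _process_docstring

docstring = ("fake empty test\n"
             "This is a fake test for ha rhel deployment\n"
             "Duration: 0sec\n"
             "Deployment tags: ha, rhel")
# strip the tags marker first, then the duration marker,
# in the same order get_description applies them in the diff below
docstring, tags = _process_docstring(
    docstring, r'Deployment tags:.?(?P<tags>.+)?')
docstring, duration = _process_docstring(
    docstring, r'Duration:.?(?P<duration>.+)')
print(tags)      # 'ha, rhel' -- split on commas and stripped later
print(duration)  # '0sec'; either value is None when its marker is absent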

New changes to the nose_discovery function (checking for cluster
redeployment; the implementation is not finished). Some changes to the
controllers (mostly to return values). Changed the nose discovery tests
in preparation for future changes. Added proper processing of the
debug_tests path to wsgi_utils.
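
The path handling reduces to one guarded lookup in wsgi_utils
(verbatim from the diff below):

from pecan import conf

# fall back to the bundled fuel_health suite unless a debug_tests
# path was configured via --debug_tests
CORE_PATH = conf.debug_tests if conf.get('debug_tests') else 'fuel_health'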

Some fixes to the discovery logic and to various tests.

Simplified BaseTestingAdapter. Fixed several functional tests.

Added a new 'disabled' element to the Test class statuses enumeration.
Added a new initial_migration (since the models were modified). Fixed
the functional tests to support the new logic.
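
The resulting enumeration on the Test model (the attribute name is cut
off in the models hunk below, so STATES here is illustrative):

# illustrative name; the models diff below shows only the tuple body
STATES = (
    'wait_running',
    'running',
    'failure',
    'success',
    'error',
    'stopped',
    'disabled'  # new; the functional tests below expect it on tests
                # excluded from an individual run
)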

Something strange with wsgi_utils.

Added tests for a redeployed cluster to test_wsgi_controllers.py.
Fixed multinode_deployment_tests.

Change-Id: Ie9acfc6b57c4c3d2d1e5a7480929328fb193bf2c
Artem Roma 2013-09-25 14:44:07 +03:00
parent c7a12a74c3
commit 8c0fa6ed46
32 changed files with 1691 additions and 721 deletions

fabfile.py

@ -7,7 +7,10 @@ def createrole(user='ostf', password='ostf'):
def createdb(user='ostf', database='ostf'):
local('psql -U postgres -c "CREATE DATABASE {0} WITH OWNER={1};"'.format(database, user))
local(
'psql -U postgres -c "CREATE DATABASE {0} WITH OWNER={1};"'
.format(database, user)
)
def dropdb(database='ostf'):
@ -28,12 +31,28 @@ def testdeps():
def startserver():
local(('ostf-server '
'--dbpath postgresql+psycopg2://ostf:ostf@localhost/ostf '
'--debug --debug_tests=fuel_plugin/tests/functional/dummy_tests'))
'--dbpath postgresql+psycopg2://ostf:ostf@localhost/ostf '))
def startnailgunmimic():
path = 'fuel_plugin/tests/test_utils/nailgun_mimic.py'
local('python {}'.format(path))
def createmigration(comment):
'''
Supply a comment for the new alembic revision as the value
for the comment argument
'''
config_path = 'fuel_plugin/ostf_adapter/storage/alembic.ini'
local(
'alembic --config {0} revision --autogenerate -m \"{1}\"'
.format(config_path, comment)
)
def migrate(database='ostf'):
path='postgresql+psycopg2://ostf:ostf@localhost/{0}'.format(database)
path = 'postgresql+psycopg2://ostf:ostf@localhost/{0}'.format(database)
local('ostf-server --after-initialization-environment-hook --dbpath {0}'.format(path))


@ -21,7 +21,6 @@ import signal
from fuel_plugin.ostf_adapter import cli_config
from fuel_plugin.ostf_adapter import nailgun_hooks
from fuel_plugin.ostf_adapter import logger
from fuel_plugin.ostf_adapter.nose_plugin import nose_discovery
from gevent import pywsgi
from fuel_plugin.ostf_adapter.wsgi import app
import pecan
@ -38,7 +37,11 @@ def main():
},
'dbpath': cli_args.dbpath,
'debug': cli_args.debug,
'debug_tests': cli_args.debug_tests
'debug_tests': cli_args.debug_tests,
'nailgun': {
'host': cli_args.nailgun_host,
'port': cli_args.nailgun_port
}
}
logger.setup(log_file=cli_args.log_file)
@ -49,7 +52,6 @@ def main():
if getattr(cli_args, 'after_init_hook'):
return nailgun_hooks.after_initialization_environment_hook()
nose_discovery.discovery(cli_args.debug_tests)
host, port = pecan.conf.server.host, pecan.conf.server.port
srv = pywsgi.WSGIServer((host, int(port)), root)


@ -29,6 +29,7 @@ def parse_cli():
parser.add_argument('--port', default='8989')
parser.add_argument('--log_file', default=None, metavar='PATH')
parser.add_argument('--nailgun-host', default='127.0.0.1')
parser.add_argument('--nailgun-port', default='3232')
parser.add_argument('--nailgun-port', default='8000')
parser.add_argument('--debug_tests', default=None)
return parser.parse_args(sys.argv[1:])


@ -94,8 +94,11 @@ class NoseDriver(object):
module_obj.cleanup.cleanup()
except Exception:
LOG.exception('Cleanup errer. Test Run ID %s. Cluster ID %s',
test_run_id, cluser_id)
LOG.exception(
'Cleanup error. Test Run ID %s. Cluster ID %s',
test_run_id,
cluster_id
)
finally:
models.TestRun.update_test_run(


@ -14,6 +14,7 @@
import logging
import os
import pecan
from nose import plugins
@ -22,8 +23,6 @@ from fuel_plugin.ostf_adapter.nose_plugin import nose_utils
from fuel_plugin.ostf_adapter.storage import engine, models
CORE_PATH = 'fuel_health'
LOG = logging.getLogger(__name__)
@ -33,8 +32,9 @@ class DiscoveryPlugin(plugins.Plugin):
name = 'discovery'
score = 15000
def __init__(self):
def __init__(self, deployment_info):
self.test_sets = {}
self.deployment_info = deployment_info
super(DiscoveryPlugin, self).__init__()
def options(self, parser, env=os.environ):
@ -47,13 +47,20 @@ class DiscoveryPlugin(plugins.Plugin):
module = __import__(module, fromlist=[module])
LOG.info('Inspecting %s', filename)
if hasattr(module, '__profile__'):
session = engine.get_session()
with session.begin(subtransactions=True):
LOG.info('%s discovered.', module.__name__)
test_set = models.TestSet(**module.__profile__)
test_set = session.merge(test_set)
session.add(test_set)
self.test_sets[test_set.id] = test_set
profile = module.__profile__
if set(profile.get('deployment_tags', []))\
.issubset(self.deployment_info['deployment_tags']):
profile['cluster_id'] = self.deployment_info['cluster_id']
session = engine.get_session()
with session.begin(subtransactions=True):
LOG.info('%s discovered.', module.__name__)
test_set = models.TestSet(**profile)
test_set = session.merge(test_set)
session.add(test_set)
self.test_sets[test_set.id] = test_set
def addSuccess(self, test):
test_id = test.id()
@ -61,27 +68,47 @@ class DiscoveryPlugin(plugins.Plugin):
if test_set_id in test_id:
session = engine.get_session()
with session.begin(subtransactions=True):
LOG.info('%s added for %s', test_id, test_set_id)
data = dict()
data['title'], data['description'], data['duration'] = \
data['cluster_id'] = self.deployment_info['cluster_id']
(data['title'], data['description'],
data['duration'], data['deployment_tags']) = \
nose_utils.get_description(test)
old_test_obj = session.query(models.Test).filter_by(
name=test_id, test_set_id=test_set_id,
test_run_id=None).\
update(data, synchronize_session=False)
if not old_test_obj:
data.update({'test_set_id': test_set_id,
'name': test_id})
test_obj = models.Test(**data)
session.add(test_obj)
if set(data['deployment_tags'])\
.issubset(self.deployment_info['deployment_tags']):
data.update(
{
'test_set_id': test_set_id,
'name': test_id
}
)
#merge doesn't work here, so we must check for
#existing tests with the same test_set_id and cluster_id
#so that we don't end up with duplicated test data
#in the db.
tests = session.query(models.Test)\
.filter_by(cluster_id=self.test_sets[test_set_id].cluster_id)\
.filter_by(test_set_id=test_set_id)\
.filter_by(test_run_id=None)\
.filter_by(name=data['name'])\
.first()
if not tests:
LOG.info('%s added for %s', test_id, test_set_id)
test_obj = models.Test(**data)
session.add(test_obj)
def discovery(path=None):
def discovery(path, deployment_info={}):
"""Will discover all tests on provided path and save info in db
"""
path = path if path else CORE_PATH
LOG.info('Starting discovery for %r.', path)
nose_test_runner.SilentTestProgram(
addplugins=[DiscoveryPlugin()],
addplugins=[DiscoveryPlugin(deployment_info)],
exit=False,
argv=['tests_discovery', '--collect-only', path])
argv=['tests_discovery', '--collect-only', path]
)


@ -54,7 +54,7 @@ class StoragePlugin(plugins.Plugin):
'status': status,
'time_taken': self.taken
}
data['title'], data['description'], data['duration'] = \
data['title'], data['description'], data['duration'], data['deployment_tags'] = \
nose_utils.get_description(test)
if err:
exc_type, exc_value, exc_traceback = err


@ -43,6 +43,18 @@ def get_exc_message(exception_value):
return u""
def _process_docstring(docstring, pattern):
pattern_matcher = re.search(pattern, docstring)
if pattern_matcher:
value = pattern_matcher.group(1)
docstring = docstring[:pattern_matcher.start()]
else:
value = None
return docstring, value
def get_description(test_obj):
'''
Parses docstring of test object in order
@ -57,20 +69,34 @@ def get_description(test_obj):
docstring = test_obj.test._testMethodDoc
if docstring:
duration_pattern = r'Duration:.?(?P<duration>.+)'
duration_matcher = re.search(duration_pattern, docstring)
deployment_tags_pattern = r'Deployment tags:.?(?P<tags>.+)?'
docstring, deployment_tags = _process_docstring(
docstring,
deployment_tags_pattern
)
if duration_matcher:
duration = duration_matcher.group(1)
docstring = docstring[:duration_matcher.start()]
#if deployment tags are empty or absent,
#_process_docstring returns None, so we
#must check for this
if deployment_tags:
deployment_tags = [
tag.strip() for tag in deployment_tags.split(',')
]
else:
duration = None
deployment_tags = []
duration_pattern = r'Duration:.?(?P<duration>.+)'
docstring, duration = _process_docstring(
docstring,
duration_pattern
)
docstring = docstring.split('\n')
name = docstring.pop(0)
description = u'\n'.join(docstring) if docstring else u""
return name, description, duration
return u"", u"", u""
return name, description, duration, deployment_tags
return u"", u"", u"", []
def modify_test_name_for_nose(test_path):


@ -2,7 +2,7 @@
[alembic]
# path to migration scripts
script_location = migrations
script_location = fuel_plugin/ostf_adapter/storage/migrations
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s


@ -38,5 +38,5 @@ class ListField(JsonField):
super(ListField, self).process_bind_param(value, dialect)
def process_result_value(self, value, dialect):
value = super(ListField, self).process_bind_param(value, dialect)
value = super(ListField, self).process_result_value(value, dialect)
return list(value) if value else []


@ -1,42 +0,0 @@
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Add status field to test run model
Revision ID: 12340edd992d
Revises: 1e2c38f575fb
Create Date: 2013-07-03 17:38:42.632146
"""
# revision identifiers, used by Alembic.
revision = '12340edd992d'
down_revision = '1e2c38f575fb'
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.add_column(
'test_runs',
sa.Column('status', sa.String(length=128), nullable=True))
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_column('test_runs', 'status')
### end Alembic commands ###


@ -1,94 +0,0 @@
"""Database refactoring
Revision ID: 1b28f7bc6476
Revises: 4e9905279776
Create Date: 2013-08-07 13:20:06.373200
"""
# revision identifiers, used by Alembic.
from fuel_plugin.ostf_adapter.storage import fields
revision = '1b28f7bc6476'
down_revision = '4e9905279776'
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.add_column('test_runs', sa.Column('test_set_id', sa.String(length=128),
nullable=True))
op.add_column('test_runs',
sa.Column('meta', fields.JsonField(), nullable=True))
op.add_column('test_runs',
sa.Column('cluster_id', sa.Integer(), nullable=False))
op.drop_column('test_runs', u'type')
op.drop_column('test_runs', u'stats')
op.drop_column('test_runs', u'external_id')
op.drop_column('test_runs', u'data')
op.alter_column('test_runs', 'status',
existing_type=sa.VARCHAR(length=128),
nullable=False)
op.add_column('test_sets', sa.Column('cleanup_path', sa.String(length=128),
nullable=True))
op.add_column('test_sets',
sa.Column('meta', fields.JsonField(), nullable=True))
op.add_column('test_sets',
sa.Column('driver', sa.String(length=128), nullable=True))
op.add_column('test_sets',
sa.Column('additional_arguments', fields.ListField(),
nullable=True))
op.add_column('test_sets',
sa.Column('test_path', sa.String(length=256), nullable=True))
op.drop_column('test_sets', u'data')
op.add_column('tests', sa.Column('description', sa.Text(), nullable=True))
op.add_column('tests', sa.Column('traceback', sa.Text(), nullable=True))
op.add_column('tests', sa.Column('step', sa.Integer(), nullable=True))
op.add_column('tests', sa.Column('meta', fields.JsonField(),
nullable=True))
op.add_column('tests',
sa.Column('duration', sa.String(length=512), nullable=True))
op.add_column('tests', sa.Column('message', sa.Text(), nullable=True))
op.add_column('tests',
sa.Column('time_taken', sa.Float(), nullable=True))
op.drop_column('tests', u'taken')
op.drop_column('tests', u'data')
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.add_column('tests', sa.Column(u'data', sa.TEXT(), nullable=True))
op.add_column('tests', sa.Column(u'taken',
postgresql.DOUBLE_PRECISION(precision=53),
nullable=True))
op.drop_column('tests', 'time_taken')
op.drop_column('tests', 'message')
op.drop_column('tests', 'duration')
op.drop_column('tests', 'meta')
op.drop_column('tests', 'step')
op.drop_column('tests', 'traceback')
op.drop_column('tests', 'description')
op.add_column('test_sets', sa.Column(u'data', sa.TEXT(), nullable=True))
op.drop_column('test_sets', 'test_path')
op.drop_column('test_sets', 'additional_arguments')
op.drop_column('test_sets', 'driver')
op.drop_column('test_sets', 'meta')
op.drop_column('test_sets', 'cleanup_path')
op.alter_column('test_runs', 'status',
existing_type=sa.VARCHAR(length=128),
nullable=True)
op.add_column('test_runs', sa.Column(u'data', sa.TEXT(), nullable=True))
op.add_column('test_runs',
sa.Column(u'external_id', sa.VARCHAR(length=128),
nullable=True))
op.add_column('test_runs', sa.Column(u'stats', sa.TEXT(), nullable=True))
op.add_column('test_runs',
sa.Column(u'type', sa.VARCHAR(length=128), nullable=True))
op.drop_column('test_runs', 'cluster_id')
op.drop_column('test_runs', 'meta')
op.drop_column('test_runs', 'test_set_id')
### end Alembic commands ###


@ -1,54 +0,0 @@
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Storage refactor
Revision ID: 1e2c38f575fb
Revises: 1fcb29d29e03
Create Date: 2013-07-02 22:43:00.844574
"""
# revision identifiers, used by Alembic.
revision = '1e2c38f575fb'
down_revision = '1fcb29d29e03'
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.create_table(
'test_sets',
sa.Column('id', sa.String(length=128), nullable=False),
sa.Column('description', sa.String(length=128), nullable=True),
sa.Column('data', sa.Text(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.add_column(u'test_runs', sa.Column('external_id', sa.String(length=128),
nullable=True))
op.add_column(u'test_runs', sa.Column('stats', sa.Text(), nullable=True))
op.add_column(u'tests', sa.Column('test_set_id', sa.String(length=128),
nullable=True))
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_column(u'tests', 'test_set_id')
op.drop_column(u'test_runs', 'stats')
op.drop_column(u'test_runs', 'external_id')
op.drop_table('test_sets')
### end Alembic commands ###


@ -1,60 +0,0 @@
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""initial migration
Revision ID: 1fcb29d29e03
Revises: None
Create Date: 2013-06-26 17:40:23.908062
"""
# revision identifiers, used by Alembic.
revision = '1fcb29d29e03'
down_revision = None
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.create_table(
'test_runs',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('type', sa.String(length=128), nullable=True),
sa.Column('data', sa.Text(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_table(
'tests',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('name', sa.String(length=512), nullable=True),
sa.Column('status', sa.String(length=128), nullable=True),
sa.Column('taken', sa.Float(), nullable=True),
sa.Column('data', sa.Text(), nullable=True),
sa.Column('test_run_id', sa.Integer(), nullable=True),
sa.ForeignKeyConstraint(['test_run_id'], ['test_runs.id'], ),
sa.PrimaryKeyConstraint('id')
)
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_table('tests')
op.drop_table('test_runs')
### end Alembic commands ###


@ -1,27 +0,0 @@
"""add title column
Revision ID: 3e45add6471
Revises: 1b28f7bc6476
Create Date: 2013-08-07 17:01:59.306413
"""
# revision identifiers, used by Alembic.
revision = '3e45add6471'
down_revision = '1b28f7bc6476'
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.add_column('tests', sa.Column('title', sa.String(length=512),
nullable=True))
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_column('tests', 'title')
### end Alembic commands ###


@ -0,0 +1,72 @@
"""initial_migration
Revision ID: 490f0056d38a
Revises: None
Create Date: 2013-10-07 12:47:57.099110
"""
# revision identifiers, used by Alembic.
revision = '490f0056d38a'
down_revision = None
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from fuel_plugin.ostf_adapter.storage.fields import JsonField, ListField
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.create_table('test_sets',
sa.Column('id', sa.String(length=128), nullable=False),
sa.Column('cluster_id', sa.Integer(), autoincrement=False, nullable=False),
sa.Column('description', sa.String(length=256), nullable=True),
sa.Column('test_path', sa.String(length=256), nullable=True),
sa.Column('driver', sa.String(length=128), nullable=True),
sa.Column('additional_arguments', ListField(), nullable=True),
sa.Column('cleanup_path', sa.String(length=128), nullable=True),
sa.Column('meta', JsonField(), nullable=True),
sa.Column('deployment_tags', postgresql.ARRAY(sa.String(length=64)), nullable=True),
sa.PrimaryKeyConstraint('id', 'cluster_id')
)
op.create_table('test_runs',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('cluster_id', sa.Integer(), nullable=False),
sa.Column('status', sa.Enum('running', 'finished', name='test_run_states'), nullable=False),
sa.Column('meta', JsonField(), nullable=True),
sa.Column('started_at', sa.DateTime(), nullable=True),
sa.Column('ended_at', sa.DateTime(), nullable=True),
sa.Column('test_set_id', sa.String(length=128), nullable=True),
sa.ForeignKeyConstraint(['test_set_id', 'cluster_id'], ['test_sets.id', 'test_sets.cluster_id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('tests',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('name', sa.String(length=512), nullable=True),
sa.Column('title', sa.String(length=512), nullable=True),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('duration', sa.String(length=512), nullable=True),
sa.Column('message', sa.Text(), nullable=True),
sa.Column('traceback', sa.Text(), nullable=True),
sa.Column('status', sa.Enum('wait_running', 'running', 'failure', 'success', 'error', 'stopped', 'disabled', name='test_states'), nullable=True),
sa.Column('step', sa.Integer(), nullable=True),
sa.Column('time_taken', sa.Float(), nullable=True),
sa.Column('meta', JsonField(), nullable=True),
sa.Column('deployment_tags', postgresql.ARRAY(sa.String(length=64)), nullable=True),
sa.Column('cluster_id', sa.Integer(), nullable=False),
sa.Column('test_set_id', sa.String(length=128), nullable=True),
sa.Column('test_run_id', sa.Integer(), nullable=True),
sa.ForeignKeyConstraint(['test_run_id'], ['test_runs.id'], ondelete='CASCADE'),
sa.ForeignKeyConstraint(['test_set_id', 'cluster_id'], ['test_sets.id', 'test_sets.cluster_id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_table('tests')
op.drop_table('test_runs')
op.drop_table('test_sets')
### end Alembic commands ###


@ -1,44 +0,0 @@
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Add started_at , ended_at on TestRun model
Revision ID: 4e9905279776
Revises: 12340edd992d
Create Date: 2013-07-04 12:10:49.219213
"""
# revision identifiers, used by Alembic.
revision = '4e9905279776'
down_revision = '12340edd992d'
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.add_column('test_runs',
sa.Column('started_at', sa.DateTime(), nullable=True))
op.add_column('test_runs',
sa.Column('ended_at', sa.DateTime(), nullable=True))
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_column('test_runs', 'ended_at')
op.drop_column('test_runs', 'started_at')
### end Alembic commands ###


@ -18,6 +18,8 @@ import sqlalchemy as sa
from sqlalchemy import desc
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import joinedload, relationship, object_mapper
from sqlalchemy.dialects.postgres import ARRAY
from fuel_plugin.ostf_adapter import nose_plugin
from fuel_plugin.ostf_adapter.storage import fields, engine
@ -41,10 +43,26 @@ class TestRun(BASE):
meta = sa.Column(fields.JsonField())
started_at = sa.Column(sa.DateTime, default=datetime.utcnow)
ended_at = sa.Column(sa.DateTime)
test_set_id = sa.Column(sa.String(128), sa.ForeignKey('test_sets.id'))
test_set_id = sa.Column(sa.String(128))
test_set = relationship('TestSet', backref='test_runs')
tests = relationship('Test', backref='test_run', order_by='Test.name')
tests = relationship(
'Test',
backref='test_run',
order_by='Test.name',
cascade='delete'
)
#following code defines the proper foreign key
#constraint for the composite primary key
__table_args__ = (
sa.ForeignKeyConstraint(
['test_set_id', 'cluster_id'],
['test_sets.id', 'test_sets.cluster_id'],
ondelete='CASCADE'
),
{}
)
def update(self, session, status):
self.status = status
@ -78,9 +96,18 @@ class TestRun(BASE):
@classmethod
def add_test_run(cls, session, test_set, cluster_id, status='running',
tests=None):
'''
Creates a new test_run object with the given data
and makes copies of the tests that will be bound
to this test_run. Copying is performed by the
copy_test method of the Test class.
'''
predefined_tests = tests or []
tests = session.query(Test).filter_by(
test_set_id=test_set, test_run_id=None)
tests = session.query(Test)\
.filter_by(test_set_id=test_set,
cluster_id=cluster_id,
test_run_id=None)
test_run = cls(test_set_id=test_set, cluster_id=cluster_id,
status=status)
session.add(test_run)
@ -123,17 +150,36 @@ class TestRun(BASE):
updated_data['status'] = status
if status in ['finished']:
updated_data['ended_at'] = datetime.utcnow()
session.query(cls). \
filter(cls.id == test_run_id). \
update(updated_data, synchronize_session=False)
@classmethod
def is_last_running(cls, session, test_set, cluster_id):
'''
Checks whether one can create a new test_run by
testing for an existing test_run object with the
given data, or one with 'finished' status.
'''
test_run = cls.get_last_test_run(session, test_set, cluster_id)
return not bool(test_run) or test_run.is_finished()
@classmethod
def start(cls, session, test_set, metadata, tests):
'''
Checks whether the system must create a new
test_run by calling is_last_running.
Creates a new test_run if needed via the
add_test_run function. Creating a new
test_run means not only adding the test_run object
but also copying the tests related to the
test_set of the created test_run.
Runs tests from the newly created test_run
via the needed testing plugin.
'''
plugin = nose_plugin.get_plugin(test_set.driver)
if cls.is_last_running(session, test_set.id,
metadata['cluster_id']):
@ -178,23 +224,32 @@ class TestSet(BASE):
__tablename__ = 'test_sets'
id = sa.Column(sa.String(128), primary_key=True)
cluster_id = sa.Column(sa.Integer(), primary_key=True, autoincrement=False)
description = sa.Column(sa.String(256))
test_path = sa.Column(sa.String(256))
driver = sa.Column(sa.String(128))
additional_arguments = sa.Column(fields.ListField())
cleanup_path = sa.Column(sa.String(128))
meta = sa.Column(fields.JsonField())
deployment_tags = sa.Column(ARRAY(sa.String(64)))
tests = relationship('Test',
backref='test_set', order_by='Test.name')
tests = relationship(
'Test',
backref='test_set',
order_by='Test.name',
cascade='delete'
)
@property
def frontend(self):
return {'id': self.id, 'name': self.description}
@classmethod
def get_test_set(cls, session, test_set):
return session.query(cls).filter_by(id=test_set).first()
def get_test_set(cls, session, test_set, metadata):
return session.query(cls)\
.filter_by(id=test_set)\
.filter_by(cluster_id=metadata['cluster_id'])\
.first()
class Test(BASE):
@ -207,7 +262,8 @@ class Test(BASE):
'failure',
'success',
'error',
'stopped'
'stopped',
'disabled'
)
id = sa.Column(sa.Integer(), primary_key=True)
@ -221,9 +277,28 @@ class Test(BASE):
step = sa.Column(sa.Integer())
time_taken = sa.Column(sa.Float())
meta = sa.Column(fields.JsonField())
deployment_tags = sa.Column(ARRAY(sa.String(64)))
cluster_id = sa.Column(sa.Integer(), nullable=False)
test_set_id = sa.Column(sa.String(128))
test_set_id = sa.Column(sa.String(128), sa.ForeignKey('test_sets.id'))
test_run_id = sa.Column(sa.Integer(), sa.ForeignKey('test_runs.id'))
test_run_id = sa.Column(
sa.Integer(),
sa.ForeignKey(
'test_runs.id',
ondelete='CASCADE'
)
)
#following code defines the proper foreign key
#constraint for the composite primary key
__table_args__ = (
sa.ForeignKeyConstraint(
['test_set_id', 'cluster_id'],
['test_sets.id', 'test_sets.cluster_id'],
ondelete='CASCADE'
),
{}
)
@property
def frontend(self):
@ -261,6 +336,10 @@ class Test(BASE):
update({'status': status}, synchronize_session=False)
def copy_test(self, test_run, predefined_tests):
'''
Copies the tests for a newly created
test_run.
'''
new_test = self.__class__()
mapper = object_mapper(self)
primary_keys = set([col.key for col in mapper.primary_key])


@ -17,9 +17,10 @@ import logging
from sqlalchemy import func
from sqlalchemy.orm import joinedload
from pecan import rest, expose, request
from fuel_plugin.ostf_adapter.storage import models
from fuel_plugin.ostf_adapter.wsgi.wsgi_utils import discovery_check
LOG = logging.getLogger(__name__)
@ -42,35 +43,33 @@ class BaseRestController(rest.RestController):
class TestsController(BaseRestController):
@expose('json')
def get_one(self, test_name):
raise NotImplementedError()
@expose('json')
def get_all(self):
def get(self, cluster):
discovery_check(cluster)
with request.session.begin(subtransactions=True):
tests = request.session.query(models.Test)\
.filter_by(cluster_id=cluster)\
.filter_by(test_run_id=None)\
.all()
return [item.frontend for item in tests]
if tests:
return [item.frontend for item in tests]
return {}
class TestsetsController(BaseRestController):
@expose('json')
def get_one(self, test_set):
def get(self, cluster):
discovery_check(cluster)
with request.session.begin(subtransactions=True):
test_set = request.session.query(models.TestSet)\
.filter_by(id=test_set).first()
if test_set and isinstance(test_set, models.TestSet):
return test_set.frontend
return {}
test_sets = request.session.query(models.TestSet)\
.filter_by(cluster_id=cluster)\
.all()
@expose('json')
def get_all(self):
with request.session.begin(subtransactions=True):
return [item.frontend for item
in request.session.query(models.TestSet).all()]
if test_sets:
return [item.frontend for item in test_sets]
return {}
class TestrunsController(BaseRestController):
@ -82,8 +81,9 @@ class TestrunsController(BaseRestController):
@expose('json')
def get_all(self):
with request.session.begin(subtransactions=True):
return [item.frontend for item
in request.session.query(models.TestRun).all()]
test_runs = request.session.query(models.TestRun).all()
return [item.frontend for item in test_runs]
@expose('json')
def get_one(self, test_run_id):
@ -98,11 +98,13 @@ class TestrunsController(BaseRestController):
def get_last(self, cluster_id):
with request.session.begin(subtransactions=True):
test_run_ids = request.session.query(func.max(models.TestRun.id)) \
.group_by(models.TestRun.test_set_id).\
filter_by(cluster_id=cluster_id)
test_runs = request.session.query(models.TestRun). \
options(joinedload('tests')). \
filter(models.TestRun.id.in_(test_run_ids))
.group_by(models.TestRun.test_set_id)\
.filter_by(cluster_id=cluster_id)
test_runs = request.session.query(models.TestRun)\
.options(joinedload('tests'))\
.filter(models.TestRun.id.in_(test_run_ids))
return [item.frontend for item in test_runs]
@expose('json')
@ -116,9 +118,18 @@ class TestrunsController(BaseRestController):
tests = test_run.get('tests', [])
test_set = models.TestSet.get_test_set(
request.session, test_set)
request.session,
test_set,
metadata
)
test_run = models.TestRun.start(
request.session, test_set, metadata, tests)
request.session,
test_set,
metadata,
tests
)
res.append(test_run)
return res


@ -0,0 +1,90 @@
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import requests
from pecan import conf
from fuel_plugin.ostf_adapter.storage import engine, models
from fuel_plugin.ostf_adapter.nose_plugin import nose_discovery
CORE_PATH = conf.debug_tests if conf.get('debug_tests') else 'fuel_health'
def discovery_check(cluster):
nailgun_api_url = 'api/clusters/{}'.format(cluster)
cluster_meta = _request_to_nailgun(nailgun_api_url)
#at this moment we need the following deployment
#arguments for the cluster. The main inconvenience
#is that the needed data is spread across the cluster_meta
#dict, which leads to this workaround
cluster_deployment_args = set(
[
cluster_meta['mode'],
cluster_meta['release']['operating_system']
]
)
cluster_data = {
'cluster_id': cluster,
'deployment_tags': cluster_deployment_args
}
session = engine.get_session()
with session.begin(subtransactions=True):
test_sets = session.query(models.TestSet)\
.filter_by(cluster_id=cluster)\
.all()
if not test_sets:
nose_discovery.discovery(
path=CORE_PATH,
deployment_info=cluster_data
)
else:
for testset in test_sets:
deployment_tags = testset.deployment_tags
deployment_tags = deployment_tags if deployment_tags else []
if not set(deployment_tags).issubset(
cluster_data['deployment_tags']
):
#perform cascade deletion of the testset
#and its corresponding tests, and of the
#testruns together with their tests
session.query(models.TestSet)\
.filter_by(id=testset.id)\
.filter_by(cluster_id=testset.cluster_id)\
.delete()
#perform final discovery for tests
nose_discovery.discovery(
path=CORE_PATH,
deployment_info=cluster_data
)
def _request_to_nailgun(api_url):
nailgun_url = 'http://{0}:{1}/{2}'.format(
conf.nailgun.host,
conf.nailgun.port,
api_url
)
req_ses = requests.Session()
req_ses.trust_env = False
response = req_ses.get(nailgun_url)
return response.json()


@ -24,19 +24,46 @@ class TestingAdapterClient(object):
def _request(self, method, url, data=None):
headers = {'content-type': 'application/json'}
r = requests.request(method, url, data=data, headers=headers, timeout=30.0)
if 2 != r.status_code/100:
raise AssertionError('{method} "{url}" responded with '
'"{code}" status code'.format(
method=method.upper(),
url=url, code=r.status_code))
r = requests.request(
method,
url,
data=data,
headers=headers,
timeout=30.0
)
if 2 != r.status_code / 100:
raise AssertionError(
'{method} "{url}" responded with '
'"{code}" status code'.format(
method=method.upper(),
url=url, code=r.status_code)
)
return r
def __getattr__(self, item):
getters = ['testsets', 'tests', 'testruns']
if item in getters:
url = ''.join([self.url, '/', item])
return lambda: self._request('GET', url)
#def __getattr__(self, item):
# getters = ['testsets', 'tests', 'testruns']
# if item in getters:
# url = ''.join([self.url, '/', item])
# return lambda: self._request('GET', url)
def testsets(self, cluster_id):
url = ''.join(
[self.url, '/testsets/', str(cluster_id)]
)
return self._request('GET', url)
def tests(self, cluster_id):
url = ''.join(
[self.url, '/tests/', str(cluster_id)]
)
return self._request('GET', url)
def testruns(self):
url = ''.join(
[self.url, '/testruns/']
)
return self._request('GET', url)
def testruns_last(self, cluster_id):
url = ''.join([self.url, '/testruns/last/',
@ -48,34 +75,50 @@ class TestingAdapterClient(object):
def start_testrun_tests(self, testset, tests, cluster_id):
url = ''.join([self.url, '/testruns'])
data = [{'testset': testset,
'tests': tests,
'metadata': {'cluster_id': str(cluster_id)}}]
data = [
{
'testset': testset,
'tests': tests,
'metadata': {'cluster_id': str(cluster_id)}
}
]
return self._request('POST', url, data=dumps(data))
def stop_testrun(self, testrun_id):
url = ''.join([self.url, '/testruns'])
data = [{"id": testrun_id,
"status": "stopped"}]
data = [
{
"id": testrun_id,
"status": "stopped"
}
]
return self._request("PUT", url, data=dumps(data))
def stop_testrun_last(self, testset, cluster_id):
latest = self.testruns_last(cluster_id).json()
testrun_id = [item['id'] for item in latest
if item['testset'] == testset][0]
testrun_id = [
item['id'] for item in latest
if item['testset'] == testset
][0]
return self.stop_testrun(testrun_id)
def restart_tests(self, tests, testrun_id):
url = ''.join([self.url, '/testruns'])
body = [{'id': str(testrun_id),
'tests': tests,
'status': 'restarted'}]
body = [
{
'id': str(testrun_id),
'tests': tests,
'status': 'restarted'
}
]
return self._request('PUT', url, data=dumps(body))
def restart_tests_last(self, testset, tests, cluster_id):
latest = self.testruns_last(cluster_id).json()
testrun_id = [item['id'] for item in latest
if item['testset'] == testset][0]
testrun_id = [
item['id'] for item in latest
if item['testset'] == testset
][0]
return self.restart_tests(tests, testrun_id)
def _with_timeout(self, action, testset, cluster_id,
@ -105,14 +148,22 @@ class TestingAdapterClient(object):
if polling_hook:
polling_hook(stopped_response)
stopped_response = self.testruns_last(cluster_id)
stopped_status = [item['status'] for item in stopped_response.json()
if item['testset'] == testset][0]
stopped_status = [
item['status'] for item in stopped_response.json()
if item['testset'] == testset
][0]
msg = '{0} is still in {1} state. Now the state is {2}'.format(
testset, current_status, stopped_status)
msg_tests = '\n'.join(['{0} -> {1}, {2}'.format(
item['id'], item['status'], item['taken'])
for item in current_tests])
msg_tests = '\n'.join(
[
'{0} -> {1}, {2}'.format(
item['id'], item['status'], item['taken']
)
for item in current_tests
]
)
raise AssertionError('\n'.join([msg, msg_tests]))
return current_response


@ -34,23 +34,21 @@ class Response(object):
else:
self._parse_json(response.json())
self.request = '{0} {1} \n with {2}'\
.format(response.request.method, response.request.url, response.request.body)
.format(
response.request.method,
response.request.url,
response.request.body
)
def __getattr__(self, item):
if item in self.test_sets or item in self._tests:
return self.test_sets.get(item) or self._tests.get(item)
else:
return super(type(self), self).__delattr__(item)
if item in self.test_sets:
return self.test_sets.get(item)
def __str__(self):
if self.is_empty:
return "Empty"
return self.test_sets.__str__()
@classmethod
def set_test_name_mapping(cls, mapping):
cls.test_name_mapping = mapping
def _parse_json(self, json):
if json == [{}]:
self.is_empty = True
@ -60,12 +58,9 @@ class Response(object):
self.test_sets = {}
self._tests = {}
for testset in json:
self.test_sets[testset.pop('testset')] = testset
self._tests = dict((self._friendly_name(item.get('id')), item) for item in testset['tests'])
def _friendly_name(self, name):
return self.test_name_mapping.get(name, name)
class AdapterClientProxy(object):
@ -77,8 +72,6 @@ class AdapterClientProxy(object):
if item in TestingAdapterClient.__dict__:
call = getattr(self.client, item)
return self._decorate_call(call)
def _friendly_map(self, mapping):
Response.set_test_name_mapping(mapping)
def _decorate_call(self, call):
@wraps(call)
@ -88,8 +81,6 @@ class AdapterClientProxy(object):
return inner
class SubsetException(Exception):
pass
@ -101,34 +92,47 @@ class BaseAdapterTest(TestCase):
raise AssertionError(msg)
if not isinstance(comparable, Response):
comparable = Response(comparable)
test_set = comparable.test_sets.keys()[0]
test_set_data = comparable.test_sets[test_set]
tests = comparable._tests
diff = []
for item in test_set_data:
if item == 'tests':
continue
if response.test_sets[test_set][item] != test_set_data[item]:
msg = 'Actual "{0}" != expected "{1}" in {2}.{3}'.format(response.test_sets[test_set][item],
test_set_data[item], test_set, item)
diff.append(msg)
for test_set in comparable.test_sets.keys():
test_set_data = comparable.test_sets[test_set]
tests = test_set_data['tests']
diff = []
for test_name, test in tests.iteritems():
for t in test:
if t == 'id':
for item in test_set_data:
if item == 'tests':
continue
if response._tests[test_name][t] != test[t]:
msg = 'Actual "{0}" != expected"{1}" in {2}.{3}.{4}'.format(response._tests[test_name][t],
test[t], test_set, test_name, t)
if response.test_sets[test_set][item] != test_set_data[item]:
msg = 'Actual "{0}" != expected "{1}" in {2}.{3}'.format(
response.test_sets[test_set][item],
test_set_data[item],
test_set,
item
)
diff.append(msg)
if diff:
raise AssertionError(diff)
tests = dict([(test['id'], test) for test in tests])
response_tests = dict(
[
(test['id'], test) for test in
response.test_sets[test_set]['tests']
]
)
for test_id, test_data in tests.iteritems():
for data_key, data_value in test_data.iteritems():
if not response_tests[test_id][data_key] == data_value:
raise AssertionError(('expected: test_set {0}, test_id {1}, data_key {2}, data_value {3}...'
'got: {4}')
.format(
test_set,
test_id,
data_key,
data_value,
response_tests[test_id][data_key]
)
)
@staticmethod
def init_client(url, mapping):
def init_client(url):
ac = AdapterClientProxy(url)
ac._friendly_map(mapping)
return ac


@ -0,0 +1,13 @@
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@ -0,0 +1,70 @@
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
__profile__ = {
"id": "ha_deployment_test",
"driver": "nose",
"test_path": "fuel_plugin/tests/functional/deployment_types_tests/ha_deployment.py",
"description": "Fake tests for HA deployment",
"deployment_tags": ["ha"]
}
import time
import httplib
import unittest
class HATest(unittest.TestCase):
def test_ha_rhel_depl(self):
"""fake empty test
This is a fake test for ha
rhel deployment
Duration: 0sec
Deployment tags: ha, rhel
"""
self.assertTrue(True)
def test_ha_rhel_quantum_depl(self):
"""fake empty test
This is a fake test for
ha rhel with quantum
Duration: 0sec
Deployment tags: ha, rhel, quantum
"""
self.assertTrue(True)
def test_ha_ubuntu_depl(self):
"""fake empty test
This is a fake test for ha
ubuntu deployment
Deployment tags: ha, ubuntu
"""
self.assertTrue(True)
def test_ha_ubuntu_novanet_depl(self):
"""fake empty test
This is an empty test for ha
ubuntu with nova-network
deployment
Deployment tags: ha, ubuntu, nova_network
"""
self.assertTrue(True)
def test_ha_depl(self):
"""fake empty test
This is an empty test for any
ha deployment
"""
self.assertTrue(True)


@ -0,0 +1,56 @@
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
__profile__ = {
"id": "multinode_deployment_test",
"driver": "nose",
"test_path": "fuel_plugin/tests/functional/deployment_types_tests/multinode_deployment.py",
"description": "Fake tests for multinode deployment on ubuntu",
"deployment_tags": ["multinode", "ubuntu"]
}
import time
import unittest
class MultinodeTest(unittest.TestCase):
def test_multi_novanet_depl(self):
"""fake empty test
This is a fake empty test
for multinode on ubuntu with
nova-network deployment
Duration: 0sec
Deployment tags: multinode, ubuntu, nova_network
"""
self.assertTrue(True)
def test_multi_quantum_depl(self):
"""fake empty test
This is a fake empty test
for multinode on ubuntu with
quatum deployment
Duration: 0sec
Deployment tags: multinode, ubuntu, quantum
"""
self.assertTrue(True)
def test_multi_depl(self):
"""fake empty test
This is a fake empty test
for multinode on ubuntu
deployment
Duration: 1sec
"""
self.assertTrue(True)


@ -16,7 +16,8 @@ __profile__ = {
"id": "general_test",
"driver": "nose",
"test_path": "fuel_plugin/tests/functional/dummy_tests/general_test.py",
"description": "General fake tests"
"description": "General fake tests",
"deployment_tags": []
}
import time


@ -16,7 +16,8 @@ __profile__ = {
"id": "stopped_test",
"driver": "nose",
"test_path": "fuel_plugin/tests/functional/dummy_tests/stopped_test.py",
"description": "Long running 25 secs fake tests"
"description": "Long running 25 secs fake tests",
"deployment_tags": []
}
import time


@ -13,13 +13,10 @@
# under the License.
import time
import mock
from fuel_plugin.tests.functional.base import BaseAdapterTest, Response
from fuel_plugin.ostf_client.client import TestingAdapterClient as adapter
from fuel_plugin.ostf_adapter.storage import engine, models
class AdapterTests(BaseAdapterTest):
@ -36,84 +33,181 @@ class AdapterTests(BaseAdapterTest):
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fail_with_step': 'fail_step',
'fuel_plugin.tests.functional.dummy_tests.stopped_test.dummy_tests_stopped.test_really_long': 'really_long',
'fuel_plugin.tests.functional.dummy_tests.stopped_test.dummy_tests_stopped.test_not_long_at_all': 'not_long',
'fuel_plugin.tests.functional.dummy_tests.stopped_test.dummy_tests_stopped.test_one_no_so_long': 'so_long'
'fuel_plugin.tests.functional.dummy_tests.stopped_test.dummy_tests_stopped.test_one_no_so_long': 'so_long',
'fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test.HATest.test_ha_depl': 'ha_depl',
'fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test.HATest.test_ha_rhel_depl': 'ha_rhel_depl'
}
cls.testsets = {
# "fuel_smoke": None,
# "fuel_sanity": None,
"general_test": ['fast_pass', 'fast_error', 'fast_fail', 'long_pass'],
"stopped_test": ['really_long', 'not_long', 'so_long']
"ha_deployment_test": [],
"general_test": [
'fast_pass',
'fast_error',
'fast_fail',
'long_pass',
],
"stopped_test": [
'really_long',
'not_long',
'so_long'
]
}
cls.adapter = adapter(url)
cls.client = cls.init_client(url, cls.mapping)
def test_tests_list_with_no_testrun(self):
'''
Starts a test_run to fill the database with the needed data.
Checks whether the test controller returns only tests
with no test_run_id field
'''
testset = "general_test"
cluster_id = 1
self.client.start_testrun(testset, cluster_id)
r = self.adapter.tests().json()
self.assertEqual(len(r), len(self.mapping))
cls.client = cls.init_client(url)
def test_list_testsets(self):
"""Verify that self.testsets are in json response
"""
json = self.adapter.testsets().json()
cluster_id = 1
json = self.adapter.testsets(cluster_id).json()
response_testsets = [item['id'] for item in json]
for testset in self.testsets:
msg = '"{test}" not in "{response}"'.format(test=testset, response=response_testsets)
msg = '"{test}" not in "{response}"'.format(
test=testset,
response=response_testsets
)
self.assertTrue(testset in response_testsets, msg)
def test_list_tests(self):
"""Verify that self.tests are in json response
"""
json = self.adapter.tests().json()
cluster_id = 1
json = self.adapter.tests(cluster_id).json()
response_tests = [item['id'] for item in json]
for test in self.mapping:
msg = '"{test}" not in "{response}"'.format(test=test.capitalize(), response=response_tests)
for test in self.mapping.keys():
msg = '"{test}" not in "{response}"'.format(
test=test.capitalize(),
response=response_tests
)
self.assertTrue(test in response_tests, msg)
def test_run_testset(self):
"""Verify that test status changes in time from running to success
"""
testset = "general_test"
testsets = ["general_test", "stopped_test"]
cluster_id = 1
self.client.start_testrun(testset, cluster_id)
time.sleep(3)
#make sure we have data about test_sets in db
self.adapter.testsets(cluster_id)
for testset in testsets:
self.client.start_testrun(testset, cluster_id)
time.sleep(5)
r = self.client.testruns_last(cluster_id)
assertions = Response([{'status': 'running',
'testset': 'general_test',
'tests': [
{'id': 'fast_pass', 'status': 'success', 'name': 'fast pass test',
'description': """ This is a simple always pass test
""",},
{'id': 'long_pass', 'status': 'running'},
{'id': 'fail_step', 'message': 'Fake fail message', 'status': 'failure'},
{'id': 'fast_error', 'message': '', 'status': 'error'},
{'id': 'fast_fail', 'message': 'Something goes wroooong', 'status': 'failure'}]}])
print r
print assertions
assertions = Response(
[
{
'testset': 'general_test',
'status': 'running',
'tests': [
{
'status': 'failure',
'testset': 'general_test',
'name': 'Fast fail with step',
'message': 'Fake fail message',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fail_with_step',
'description': ' '
},
{
'status': 'error',
'testset': 'general_test',
'name': 'And fast error',
'message': '',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_error',
'description': ' '
},
{
'status': 'failure',
'testset': 'general_test',
'name': 'Fast fail',
'message': 'Something goes wroooong',
'id': u'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_fail',
'description': ' '
},
{
'status': 'success',
'testset': 'general_test',
'name': 'fast pass test',
'duration': '1sec',
'message': '',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass',
'description': ' This is a simple always pass test\n '
},
{
'status': 'running',
'testset': 'general_test',
'name': 'Will sleep 5 sec',
'duration': '5sec',
'message': '',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_long_pass',
'description': ' This is a simple test\n it will run for 5 sec\n '
}
],
'meta': None,
'cluster_id': 1,
},
{
'testset': 'stopped_test',
'status': 'running',
'tests': [
{
'status': 'success',
'testset': 'stopped_test',
'name': 'You know.. for testing',
'duration': '1sec',
'message': '',
'id': 'fuel_plugin.tests.functional.dummy_tests.stopped_test.dummy_tests_stopped.test_not_long_at_all',
'description': ' '
},
{
'status': 'running',
'testset': 'stopped_test',
'name': 'What i am doing here? You ask me????',
'duration': None,
'message': '',
'id': 'fuel_plugin.tests.functional.dummy_tests.stopped_test.dummy_tests_stopped.test_one_no_so_long',
'description': ' '
},
{
'status': 'wait_running',
'testset': 'stopped_test',
'name': 'This is long running tests',
'duration': '25sec',
'message': None,
'id': 'fuel_plugin.tests.functional.dummy_tests.stopped_test.dummy_tests_stopped.test_really_long',
'description': ' '
}
],
'meta': None,
'cluster_id': 1,
}
]
)
self.compare(r, assertions)
time.sleep(10)
time.sleep(30)
r = self.client.testruns_last(cluster_id)
assertions.general_test['status'] = 'finished'
assertions.long_pass['status'] = 'success'
assertions.stopped_test['status'] = 'finished'
for test in assertions.general_test['tests']:
if test['name'] == 'Will sleep 5 sec':
test['status'] = 'success'
for test in assertions.stopped_test['tests']:
if test['name'] == 'This is long running tests':
test['status'] = 'success'
test['message'] = ''
if test['name'] == 'What i am doing here? You ask me????':
test['status'] = 'success'
self.compare(r, assertions)
@ -121,18 +215,54 @@ class AdapterTests(BaseAdapterTest):
"""Verify that long running testrun can be stopped
"""
testset = "stopped_test"
cluster_id = 2
cluster_id = 1
#make sure we have all needed data in db
#for this test case
self.adapter.testsets(cluster_id)
self.client.start_testrun(testset, cluster_id)
time.sleep(10)
time.sleep(20)
r = self.client.testruns_last(cluster_id)
assertions = Response([
{'status': 'running',
{
'testset': 'stopped_test',
'status': 'running',
'tests': [
{'id': 'not_long', 'status': 'success'},
{'id': 'so_long', 'status': 'success'},
{'id': 'really_long', 'status': 'running'}]}])
{
'status': 'success',
'testset': 'stopped_test',
'name': 'You know.. for testing',
'duration': '1sec',
'message': '',
'id': 'fuel_plugin.tests.functional.dummy_tests.stopped_test.dummy_tests_stopped.test_not_long_at_all',
'description': ' '
},
{
'status': 'success',
'testset': 'stopped_test',
'name': 'What i am doing here? You ask me????',
'duration': None,
'message': '',
'id': 'fuel_plugin.tests.functional.dummy_tests.stopped_test.dummy_tests_stopped.test_one_no_so_long',
'description': ' '
},
{
'status': 'running',
'testset': 'stopped_test',
'name': 'This is long running tests',
'duration': '25sec',
'message': '',
'id': 'fuel_plugin.tests.functional.dummy_tests.stopped_test.dummy_tests_stopped.test_really_long',
'description': ' '
}
],
'meta': None,
'cluster_id': 1
}
])
self.compare(r, assertions)
@ -140,14 +270,21 @@ class AdapterTests(BaseAdapterTest):
r = self.client.testruns_last(cluster_id)
assertions.stopped_test['status'] = 'finished'
assertions.really_long['status'] = 'stopped'
for test in assertions.stopped_test['tests']:
if test['name'] == 'This is long running tests':
test['status'] = 'stopped'
self.compare(r, assertions)
def test_cant_start_while_running(self):
"""Verify that you can't start new testrun for the same cluster_id while previous run is running"""
testsets = {"stopped_test": None,
"general_test": None}
cluster_id = 3
"""Verify that you can't start new testrun
for the same cluster_id while previous run
is running
"""
testsets = {
"stopped_test": None,
"general_test": None
}
cluster_id = 1
for testset in testsets:
self.client.start_testrun(testset, cluster_id)
@ -162,105 +299,221 @@ class AdapterTests(BaseAdapterTest):
self.assertTrue(r.is_empty, msg)
def test_start_many_runs(self):
"""Verify that you can start 20 testruns in a row with different cluster_id"""
"""Verify that you can start more than one
testruns in a row with different cluster_id
"""
testset = "general_test"
for cluster_id in range(100, 105):
for cluster_id in range(1, 2):
r = self.client.start_testrun(testset, cluster_id)
msg = '{0} was empty'.format(r.request)
self.assertFalse(r.is_empty, msg)
'''TODO: Rewrite assertions to verify that all 5 testruns ended with appropriate status'''
'''TODO: Rewrite assertions to verify that all
5 testruns ended with appropriate status
'''
def test_run_single_test(self):
"""Verify that you can run individual tests from given testset"""
testset = "general_test"
tests = ['fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass',
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_fail']
cluster_id = 50
tests = [
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass',
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_fail'
]
cluster_id = 1
#make sure that we have all needed data in db
self.adapter.testsets(cluster_id)
r = self.client.start_testrun_tests(testset, tests, cluster_id)
assertions = Response([
{'status': 'running',
'testset': 'general_test',
'tests': [
{'status': 'disabled', 'id': 'fast_error'},
{'status': 'wait_running', 'id': 'fast_fail'},
{'status': 'wait_running', 'id': 'fast_pass'},
{'status': 'disabled', 'id': 'long_pass'}]}])
{
'testset': 'general_test',
'status': 'running',
'tests': [
{
'status': 'disabled',
'name': 'Fast fail with step',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fail_with_step',
},
{
'status': 'disabled',
'name': 'And fast error',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_error',
},
{
'status': 'wait_running',
'name': 'Fast fail',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_fail',
},
{
'status': 'wait_running',
'name': 'fast pass test',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass',
},
{
'status': 'disabled',
'name': 'Will sleep 5 sec',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_long_pass',
}
],
'cluster_id': '1',
}
])
self.compare(r, assertions)
time.sleep(2)
r = self.client.testruns_last(cluster_id)
assertions.general_test['status'] = 'finished'
assertions.fast_fail['status'] = 'failure'
assertions.fast_pass['status'] = 'success'
for test in assertions.general_test['tests']:
if test['name'] == 'Fast fail':
test['status'] = 'failure'
elif test['name'] == 'fast pass test':
test['status'] = 'success'
self.compare(r, assertions)
def test_single_test_restart(self):
"""Verify that you restart individual tests for given testrun"""
testset = "general_test"
tests = ['fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass',
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_fail']
cluster_id = 60
tests = [
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass',
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_fail'
]
cluster_id = 1
#make sure we have all needed data in db
self.adapter.testsets(cluster_id)
self.client.run_testset_with_timeout(testset, cluster_id, 10)
r = self.client.restart_tests_last(testset, tests, cluster_id)
assertions = Response([
{'status': 'running',
{
'testset': 'general_test',
'status': 'running',
'tests': [
{'id': 'fast_pass', 'status': 'wait_running'},
{'id': 'long_pass', 'status': 'success'},
{'id': 'fast_error', 'status': 'error'},
{'id': 'fast_fail', 'status': 'wait_running'}]}])
{
'status': 'failure',
'name': 'Fast fail with step',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fail_with_step',
},
{
'status': 'error',
'name': 'And fast error',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_error',
},
{
'status': 'wait_running',
'name': 'Fast fail',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_fail',
},
{
'status': 'wait_running',
'name': 'fast pass test',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass',
},
{
'status': 'success',
'name': 'Will sleep 5 sec',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_long_pass',
}
],
'cluster_id': '1',
}
])
self.compare(r, assertions)
time.sleep(5)
r = self.client.testruns_last(cluster_id)
assertions.general_test['status'] = 'finished'
assertions.fast_pass['status'] = 'success'
assertions.fast_fail['status'] = 'failure'
assertions.general_test['status'] = 'finished'
for test in assertions.general_test['tests']:
if test['name'] == 'Fast fail':
test['status'] = 'failure'
elif test['name'] == 'fast pass test':
test['status'] = 'success'
self.compare(r, assertions)
def test_restart_combinations(self):
"""Verify that you can restart both tests that ran and did not run during single test start"""
"""Verify that you can restart both tests that
ran and did not run during single test start"""
testset = "general_test"
tests = ['fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass',
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_fail']
tests = [
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass',
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_fail'
]
disabled_test = ['fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_error', ]
cluster_id = 70
cluster_id = 1
#make sure we have all needed data in db
self.adapter.testsets(cluster_id)
self.client.run_with_timeout(testset, tests, cluster_id, 70)
self.client.restart_with_timeout(testset, tests, cluster_id, 10)
r = self.client.restart_tests_last(testset, disabled_test, cluster_id)
assertions = Response([
{'status': 'running',
'testset': 'general_test',
'tests': [
{'status': 'wait_running', 'id': 'fast_error'},
{'status': 'failure', 'id': 'fast_fail'},
{'status': 'success', 'id': 'fast_pass'},
{'status': 'disabled', 'id': 'long_pass'}]}])
{
'testset': 'general_test',
'status': 'running',
'tests': [
{
'status': 'disabled',
'name': 'Fast fail with step',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fail_with_step',
},
{
'status': 'wait_running',
'name': 'And fast error',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_error',
},
{
'status': 'failure',
'name': 'Fast fail',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_fail',
},
{
'status': 'success',
'name': 'fast pass test',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass',
},
{
'status': 'disabled',
'name': 'Will sleep 5 sec',
'id': 'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_long_pass',
}
],
'cluster_id': '1',
}
])
self.compare(r, assertions)
time.sleep(5)
r = self.client.testruns_last(cluster_id)
assertions.general_test['status'] = 'finished'
assertions.fast_error['status'] = 'error'
for test in assertions.general_test['tests']:
if test['name'] == 'And fast error':
test['status'] = 'error'
self.compare(r, assertions)
def test_cant_restart_during_run(self):
testset = 'general_test'
tests = ['fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass',
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_fail',
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass']
cluster_id = 999
tests = [
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass',
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_fail',
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass'
]
cluster_id = 1
#make sure that we have all needed data in db
self.adapter.testsets(cluster_id)
self.client.start_testrun(testset, cluster_id)
time.sleep(2)
View File
@ -0,0 +1,39 @@
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from pecan import conf
from bottle import route, run
cluster_fixture = {
1: {
'mode': 'ha',
'release': {
'operating_system': 'rhel'
}
},
2: {
'mode': 'multinode',
'release': {
'operating_system': 'ubuntu'
}
}
}
@route('/api/clusters/<id:int>')
def serve_cluster_meta(id):
return cluster_fixture[id]
run(host='localhost', port=8888, debug=True)
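#illustrative usage (not part of this change): with this stub running,
#code that queries nailgun for cluster metadata can be pointed at
#localhost:8888, e.g. (assuming the requests library is available):
#
#    import requests
#    meta = requests.get('http://localhost:8888/api/clusters/1').json()
#    assert meta['mode'] == 'ha'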
View File
@ -10,4 +10,4 @@
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# under the License.
View File
@ -13,100 +13,196 @@
# under the License.
import unittest2
from mock import patch
from mock import patch, Mock
from sqlalchemy.orm import sessionmaker
from fuel_plugin.ostf_adapter.nose_plugin import nose_discovery
from fuel_plugin.ostf_adapter.storage import models
from fuel_plugin.ostf_adapter.storage import engine
stopped__profile__ = {
"id": "stopped_test",
"driver": "nose",
"test_path": "fuel_plugin/tests/functional/dummy_tests/stopped_test.py",
"description": "Long running 25 secs fake tests"
}
general__profile__ = {
"id": "general_test",
"driver": "nose",
"test_path": "fuel_plugin/tests/functional/dummy_tests/general_test.py",
"description": "General fake tests"
}
class BaseTestNoseDiscovery(unittest2.TestCase):
'''
All test writes to the database are wrapped in
a non-ORM transaction which is created in the
test case's setUp method and rolled back in
tearDown, so as to keep the production database clean
'''
@classmethod
def setUpClass(cls):
cls._mocked_pecan_conf = Mock()
cls._mocked_pecan_conf.dbpath = \
'postgresql+psycopg2://ostf:ostf@localhost/ostf'
@patch('fuel_plugin.ostf_adapter.nose_plugin.nose_discovery.engine')
class TestNoseDiscovery(unittest2.TestCase):
cls.Session = sessionmaker()
with patch(
'fuel_plugin.ostf_adapter.storage.engine.conf',
cls._mocked_pecan_conf
):
cls.engine = engine.get_engine()
def setUp(self):
self.fixtures = [models.TestSet(**general__profile__),
models.TestSet(**stopped__profile__)]
#database transaction wrapping
connection = self.engine.connect()
self.trans = connection.begin()
self.fixtures_iter = iter(self.fixtures)
self.Session.configure(bind=connection)
self.session = self.Session(bind=connection)
def test_discovery(self, engine):
engine.get_session().merge.side_effect = \
lambda *args, **kwargs: self.fixtures_iter.next()
#test_case level patching
self.mocked_get_session = lambda *args: self.session
nose_discovery.discovery(
path='fuel_plugin/tests/functional/dummy_tests'
self.session_patcher = patch(
'fuel_plugin.ostf_adapter.nose_plugin.nose_discovery.engine.get_session',
self.mocked_get_session
)
self.session_patcher.start()
self.assertEqual(engine.get_session().merge.call_count, 2)
def test_get_proper_description(self, engine):
'''
Checks whether docstrings retrieved from tests
are correct (in this case -- full).
The magic used here relies on data that is
stored deep inside the mock object passed
to the test method.
'''
#reference data is a list of docstrings of tests
#of a particular test set
expected = {
'title': 'fast pass test',
'name':
'fuel_plugin.tests.functional.dummy_tests.general_test.Dummy_test.test_fast_pass',
'duration': '1sec',
'description':
' This is a simple always pass test\n '
self.fixtures = {
'ha_deployment_test': {
'cluster_id': 1,
'deployment_tags': {
'ha',
'rhel'
}
},
'multinode_deployment_test': {
'cluster_id': 2,
'deployment_tags': {
'multinode',
'ubuntu'
}
}
}
#mocking behaviour of the afterImport hook from DiscoveryPlugin
#so that another hook -- addSuccess -- can process data properly
engine.get_session().merge = lambda arg: arg
def tearDown(self):
#end patching
self.session_patcher.stop()
#the following code provides mocking logic for
#the addSuccess hook from DiscoveryPlugin which
#in turn allows us to capture data about the
#test objects being processed
engine.get_session()\
.query()\
.filter_by()\
.update\
.return_value = None
#unwrapping
self.trans.rollback()
self.session.close()
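#the isolation pattern used above, in miniature (a sketch assuming a
#SQLAlchemy engine bound to the test database):
#
#    connection = engine.connect()
#    trans = connection.begin()          #outer, non-ORM transaction
#    session = Session(bind=connection)  #ORM session joins that transaction
#    ...                                 #test body writes through session
#    trans.rollback()                    #discard all test writes
#    session.close()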
class TestNoseDiscovery(BaseTestNoseDiscovery):
@classmethod
def setUpClass(cls):
super(TestNoseDiscovery, cls).setUpClass()
def setUp(self):
super(TestNoseDiscovery, self).setUp()
def tearDown(self):
super(TestNoseDiscovery, self).tearDown()
def test_discovery_testsets(self):
expected = {
'id': 'ha_deployment_test',
'cluster_id': 1,
'deployment_tags': ['ha']
}
nose_discovery.discovery(
path='fuel_plugin/tests/functional/dummy_tests'
path='fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test',
deployment_info=self.fixtures['ha_deployment_test']
)
#now we can refer to captured test objects (not test_sets) in order to
#compare tests against the reference data
test_obj_to_compare = [
call[0][0] for call in engine.get_session().add.call_args_list
if (
isinstance(call[0][0], models.Test)
and
call[0][0].name.rsplit('.')[-1] == 'test_fast_pass'
test_set = self.session.query(models.TestSet)\
.filter_by(id=expected['id'])\
.filter_by(cluster_id=expected['cluster_id'])\
.one()
self.assertEqual(
test_set.deployment_tags,
expected['deployment_tags']
)
def test_discovery_tests(self):
expected = {
'test_set_id': 'ha_deployment_test',
'cluster_id': 1,
'results_count': 2,
'results_data': {
'names': [
'fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test.HATest.test_ha_rhel_depl',
'fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test.HATest.test_ha_depl'
]
}
}
nose_discovery.discovery(
path='fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test',
deployment_info=self.fixtures['ha_deployment_test']
)
tests = self.session.query(models.Test)\
.filter_by(test_set_id=expected['test_set_id'])\
.filter_by(cluster_id=expected['cluster_id'])\
.all()
self.assertTrue(len(tests) == expected['results_count'])
for test in tests:
self.assertTrue(test.name in expected['results_data']['names'])
self.assertTrue(
set(test.deployment_tags)
.issubset(self.fixtures['ha_deployment_test']['deployment_tags'])
)
][0]
def test_get_proper_description(self):
expected = {
'title': 'fake empty test',
'name':
'fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test.HATest.test_ha_rhel_depl',
'duration': '0sec',
'test_set_id': 'ha_deployment_test',
'cluster_id': self.fixtures['ha_deployment_test']['cluster_id'],
'deployment_tags': ['ha', 'rhel']
}
nose_discovery.discovery(
path='fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test',
deployment_info=self.fixtures['ha_deployment_test']
)
test = self.session.query(models.Test)\
.filter_by(name=expected['name'])\
.filter_by(cluster_id=expected['cluster_id'])\
.filter_by(test_set_id=expected['test_set_id'])\
.one()
self.assertTrue(
all(
[
expected[key] == test_obj_to_compare.__dict__[key]
expected[key] == getattr(test, key)
for key in expected.keys()
]
)
)
@unittest2.skip("Not needed here")
class TestNoseDiscoveryRedeployedCluster(BaseTestNoseDiscovery):
@classmethod
def setUpClass(cls):
super(TestNoseDiscoveryRedeployedCluster, cls).setUpClass()
def setUp(self):
super(TestNoseDiscoveryRedeployedCluster, self).setUp()
#write fixtures to the db by calling
#discovery for the cluster
nose_discovery.discovery(
path='fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test',
deployment_info=self.fixtures['ha_deployment_test']
)
def tearDown(self):
super(TestNoseDiscoveryRedeployedCluster, self).tearDown()
def test_rediscover_testset(self):
pass
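#the discovery rule exercised above, in miniature (a sketch of the
#intended semantics, not the plugin's actual code):
#
#    def matches(test_tags, cluster_tags):
#        #a test belongs to a cluster when all of its deployment_tags
#        #are present among the cluster's tags
#        return set(test_tags).issubset(set(cluster_tags))
#
#    assert matches(['ha', 'rhel'], {'ha', 'rhel'})
#    assert not matches(['multinode', 'ubuntu'], {'ha', 'rhel'})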
View File
@ -16,110 +16,487 @@ import json
from mock import patch, MagicMock
import unittest2
from sqlalchemy.orm import sessionmaker
from fuel_plugin.ostf_adapter.wsgi import controllers
from fuel_plugin.ostf_adapter.storage import models
from fuel_plugin.ostf_adapter.storage import models, engine
from fuel_plugin.ostf_adapter.nose_plugin.nose_discovery import discovery
@patch('fuel_plugin.ostf_adapter.wsgi.controllers.request')
class TestTestsController(unittest2.TestCase):
TEST_PATH = \
'fuel_plugin/tests/functional/dummy_tests/deployment_types_tests'
class BaseTestController(unittest2.TestCase):
@classmethod
def setUpClass(cls):
cls._mocked_pecan_conf = MagicMock()
cls._mocked_pecan_conf.dbpath = \
'postgresql+psycopg2://ostf:ostf@localhost/ostf'
cls.Session = sessionmaker()
with patch(
'fuel_plugin.ostf_adapter.storage.engine.conf',
cls._mocked_pecan_conf
):
cls.engine = engine.get_engine()
def setUp(self):
self.fixtures = [models.Test(), models.Test()]
#orm session wrapping
connection = self.engine.connect()
self.trans = connection.begin()
self.Session.configure(bind=connection)
self.session = self.Session()
#test case level patching
#mocking
#request mocking
self.request_mock = MagicMock()
self.request_patcher = patch(
'fuel_plugin.ostf_adapter.wsgi.controllers.request',
self.request_mock
)
self.request_patcher.start()
#pecan conf mocking
self.pecan_conf_mock = MagicMock()
self.pecan_conf_mock.nailgun.host = '127.0.0.1'
self.pecan_conf_mock.nailgun.port = 8888
#engine.get_session mocking
self.request_mock.session = self.session
self.session_getter_patcher = patch(
'fuel_plugin.ostf_adapter.storage.engine.get_session',
lambda *args: self.session
)
self.session_getter_patcher.start()
def tearDown(self):
#rollback changes to database
#made by tests
self.trans.rollback()
self.session.close()
#end of test_case patching
self.request_patcher.stop()
self.session_getter_patcher.stop()
class TestTestsController(BaseTestController):
@classmethod
def setUpClass(cls):
super(TestTestsController, cls).setUpClass()
def setUp(self):
super(TestTestsController, self).setUp()
self.controller = controllers.TestsController()
def test_get_all(self, request):
request.session.query().filter_by().all.return_value = self.fixtures
request.session.query().all.return_value = self.fixtures
res = self.controller.get_all()
self.assertEqual(res, [f.frontend for f in self.fixtures])
def test_get(self):
expected = {
'cluster_id': 1,
'frontend': [
{
'status': None,
'taken': None,
'step': None,
'testset': u'ha_deployment_test',
'name': u'fake empty test',
'duration': None,
'message': None,
'id': u'fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test.HATest.test_ha_depl',
'description': u' This is empty test for any\n ha deployment\n Deployment tags:\n ',
},
{
'status': None,
'taken': None,
'step': None,
'testset': u'ha_deployment_test',
'name': u'fake empty test',
'duration': u'0sec',
'message': None,
'id': u'fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test.HATest.test_ha_rhel_depl',
'description': u' This is fake tests for ha\n rhel deployment\n '
}
]
}
@patch('fuel_plugin.ostf_adapter.wsgi.controllers.request')
class TestTestSetsController(unittest2.TestCase):
#patch CORE_PATH from nose_discovery in order
#to process only testing data
#haven't found a more elegant way to mock the
#discovery function in wsgi_utils
def discovery_mock(**kwargs):
kwargs['path'] = TEST_PATH
return discovery(**kwargs)
with patch(
'fuel_plugin.ostf_adapter.wsgi.wsgi_utils.nose_discovery.discovery',
discovery_mock
):
with patch(
'fuel_plugin.ostf_adapter.wsgi.wsgi_utils.conf',
self.pecan_conf_mock
):
res = self.controller.get(expected['cluster_id'])
self.assertEqual(res, expected['frontend'])
class TestTestSetsController(BaseTestController):
@classmethod
def setUpClass(cls):
super(TestTestSetsController, cls).setUpClass()
def setUp(self):
self.fixtures = [models.TestSet(), models.TestSet()]
super(TestTestSetsController, self).setUp()
self.controller = controllers.TestsetsController()
def test_get_all(self, request):
request.session.query().all.return_value = self.fixtures
res = self.controller.get_all()
self.assertEqual(res, [f.frontend for f in self.fixtures])
def tearDown(self):
super(TestTestSetsController, self).tearDown()
def test_get(self):
expected = {
'cluster_id': 1,
'frontend': [
{
'id': 'ha_deployment_test',
'name': 'Fake tests for HA deployment'
}
]
}
#patch CORE_PATH from nose_discovery in order
#to process only testing data
#haven't found a more elegant way to mock the
#discovery function in wsgi_utils
def discovery_mock(**kwargs):
kwargs['path'] = TEST_PATH
return discovery(**kwargs)
with patch(
'fuel_plugin.ostf_adapter.wsgi.wsgi_utils.nose_discovery.discovery',
discovery_mock
):
with patch(
'fuel_plugin.ostf_adapter.wsgi.wsgi_utils.conf',
self.pecan_conf_mock
):
res = self.controller.get(expected['cluster_id'])
self.assertEqual(res, expected['frontend'])
@patch('fuel_plugin.ostf_adapter.wsgi.controllers.request')
class TestTestRunsController(unittest2.TestCase):
class TestTestRunsController(BaseTestController):
@classmethod
def setUpClass(cls):
super(TestTestRunsController, cls).setUpClass()
def setUp(self):
self.fixtures = [models.TestRun(status='finished'),
models.TestRun(status='running')]
self.fixtures[0].test_set = models.TestSet(driver='nose')
self.storage = MagicMock()
self.plugin = MagicMock()
self.session = MagicMock()
super(TestTestRunsController, self).setUp()
#test_runs depend on tests and test_sets data
#in the database, so we must execute the discovery
#function in setUp in order to provide this data
depl_info = {
'cluster_id': 1,
'deployment_tags': {
'ha',
'rhel'
}
}
discovery(deployment_info=depl_info, path=TEST_PATH)
self.testruns = [
{
'testset': 'ha_deployment_test',
'metadata': {'cluster_id': 1}
}
]
self.controller = controllers.TestrunsController()
def test_get_all(self, request):
request.session.query().all.return_value = self.fixtures
res = self.controller.get_all()
self.assertEqual(res, [f.frontend for f in self.fixtures])
def tearDown(self):
super(TestTestRunsController, self).tearDown()
def test_get_one(self, request):
request.session.query().filter_by().first.return_value = \
self.fixtures[0]
res = self.controller.get_one(1)
self.assertEqual(res, self.fixtures[0].frontend)
@patch('fuel_plugin.ostf_adapter.wsgi.controllers.models')
def test_post(self, models, request):
request.storage = self.storage
testruns = [
{'testset': 'test_simple',
'metadata': {'cluster_id': 3}
},
{'testset': 'test_simple',
'metadata': {'cluster_id': 4}
}]
request.body = json.dumps(testruns)
fixtures_iterable = (f.frontend for f in self.fixtures)
class TestTestRunsPostController(TestTestRunsController):
models.TestRun.start.side_effect = \
lambda *args, **kwargs: fixtures_iterable.next()
res = self.controller.post()
self.assertEqual(res, [f.frontend for f in self.fixtures])
@classmethod
def setUpClass(cls):
super(TestTestRunsPostController, cls).setUpClass()
@patch('fuel_plugin.ostf_adapter.wsgi.controllers.models')
def test_put_stopped(self, models, request):
request.storage = self.storage
testruns = [
{'id': 1,
'metadata': {'cluster_id': 4},
'status': 'stopped'
}]
request.body = json.dumps(testruns)
def setUp(self):
super(TestTestRunsPostController, self).setUp()
models.TestRun.get_test_run().stop.side_effect = \
lambda *args, **kwargs: self.fixtures[0].frontend
res = self.controller.put()
self.assertEqual(res, [self.fixtures[0].frontend])
def tearDown(self):
super(TestTestRunsPostController, self).tearDown()
@patch('fuel_plugin.ostf_adapter.wsgi.controllers.models')
def test_put_restarted(self, models, request):
request.storage = self.storage
testruns = [
{'id': 1,
'metadata': {'cluster_id': 4},
'status': 'restarted'
}]
request.body = json.dumps(testruns)
def test_post(self):
expected = {
'testset': 'ha_deployment_test',
'status': 'running',
'cluster_id': 1,
'tests': {
'names': [
u'fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test.HATest.test_ha_depl',
u'fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test.HATest.test_ha_rhel_depl'
]
}
}
models.TestRun.get_test_run().restart.side_effect = \
lambda *args, **kwargs: self.fixtures[0].frontend
res = self.controller.put()
self.assertEqual(res, [self.fixtures[0].frontend])
with patch(
'fuel_plugin.ostf_adapter.wsgi.controllers.request.body',
json.dumps(self.testruns)
):
res = self.controller.post()[0]
def test_get_last(self, request):
cluster_id = 1
request.session.query().group_by().filter_by.return_value = [10, 11]
request.session.query().options().filter.return_value = self.fixtures
res = self.controller.get_last(cluster_id)
self.assertEqual(res, [f.frontend for f in self.fixtures])
#checking whether the controller is working properly
#by testing its blackbox behaviour
for key in expected.keys():
if key == 'tests':
self.assertTrue(
set(expected[key]['names']) ==
set([test['id'] for test in res[key]])
)
else:
self.assertTrue(expected[key] == res[key])
#checking whether all necessary writes to the database
#have been performed
test_run = self.session.query(models.TestRun)\
.filter_by(test_set_id=expected['testset'])\
.filter_by(cluster_id=expected['cluster_id'])\
.first()
self.assertTrue(test_run)
testrun_tests = self.session.query(models.Test)\
.filter(models.Test.test_run_id != None)\
.all()
tests_names = [
test.name for test in testrun_tests
]
self.assertTrue(set(tests_names) == set(expected['tests']['names']))
self.assertTrue(
all(
[test.status == 'wait_running' for test in testrun_tests]
)
)
#TODO: fix this test
class TestTestRunsPutController(TestTestRunsController):
@classmethod
def setUpClass(cls):
super(TestTestRunsPutController, cls).setUpClass()
def setUp(self):
super(TestTestRunsPutController, self).setUp()
self.nose_adapter_session_patcher = patch(
'fuel_plugin.ostf_adapter.nose_plugin.nose_adapter.engine.get_session',
lambda *args: self.session
)
self.nose_adapter_session_patcher.start()
#this test case needs data on a particular test_run
#to be already present in the database; that is
#provided by the following code
with patch(
'fuel_plugin.ostf_adapter.wsgi.controllers.request.body',
json.dumps(self.testruns)
):
self.stored_test_run = self.controller.post()[0]
def tearDown(self):
self.nose_adapter_session_patcher.stop()
super(TestTestRunsPutController, self).tearDown()
def test_put_stopped(self):
expected = {
'id': int(self.stored_test_run['id']),
'testset': self.stored_test_run['testset'],
'status': 'running',  # FIXME: this status looks incorrect
'cluster_id': 1,
'tests': {
'names': [
u'fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test.HATest.test_ha_depl',
u'fuel_plugin.tests.functional.dummy_tests.deployment_types_tests.ha_deployment_test.HATest.test_ha_rhel_depl'
]
}
}
testruns_to_stop = [
{
'id': int(self.stored_test_run['id']),
'metadata': {'cluster_id': int(self.stored_test_run['cluster_id'])},
'status': 'stopped'
}
]
with patch(
'fuel_plugin.ostf_adapter.wsgi.controllers.request.body',
json.dumps(testruns_to_stop)
):
res = self.controller.put()[0]
#checking whether the controller is working properly
#by testing its blackbox behaviour
for key in expected.keys():
if key == 'tests':
self.assertTrue(
set(expected[key]['names']) ==
set([test['id'] for test in res[key]])
)
else:
self.assertTrue(expected[key] == res[key])
testrun_tests = self.session.query(models.Test)\
.filter(models.Test.test_run_id == expected['id'])\
.all()
tests_names = [
test.name for test in testrun_tests
]
self.assertTrue(set(tests_names) == set(expected['tests']['names']))
self.assertTrue(
all(
[test.status == 'stopped' for test in testrun_tests]
)
)
class TestClusterRedeployment(BaseTestController):
@classmethod
def setUpClass(cls):
super(TestClusterRedeployment, cls).setUpClass()
def setUp(self):
super(TestClusterRedeployment, self).setUp()
self.controller = controllers.TestsetsController()
def tearDown(self):
super(TestClusterRedeployment, self).tearDown()
def test_cluster_redeployment_with_different_tags(self):
expected = {
'cluster_id': 1,
'old_test_set_id': 'ha_deployment_test',
'new_test_set_id': 'multinode_deployment_test',
'old_depl_tags': ['ha', 'rhel'],
'new_depl_tags': ['multinode', 'ubuntu']
}
def discovery_mock(**kwargs):
kwargs['path'] = TEST_PATH
return discovery(**kwargs)
#start discovery of testsets and tests for the given cluster info
with patch(
'fuel_plugin.ostf_adapter.wsgi.wsgi_utils.nose_discovery.discovery',
discovery_mock
):
with patch(
'fuel_plugin.ostf_adapter.wsgi.wsgi_utils.conf',
self.pecan_conf_mock
):
self.controller.get(expected['cluster_id'])
test_set = self.session.query(models.TestSet)\
.filter_by(id=expected['old_test_set_id'])\
.filter_by(cluster_id=expected['cluster_id'])\
.first()
deployment_tags = test_set.deployment_tags if test_set.deployment_tags else []
self.assertTrue(
set(deployment_tags).issubset(expected['old_depl_tags'])
)
#patch request_to_nailgun function in order to emulate
#redeployment of the cluster
cluster_data = {
'mode': 'multinode',
'release': {
'operating_system': 'ubuntu'
}
}
with patch(
'fuel_plugin.ostf_adapter.wsgi.wsgi_utils._request_to_nailgun',
lambda *args: cluster_data
):
with patch(
'fuel_plugin.ostf_adapter.wsgi.wsgi_utils.nose_discovery.discovery',
discovery_mock
):
with patch(
'fuel_plugin.ostf_adapter.wsgi.wsgi_utils.conf',
self.pecan_conf_mock
):
res = self.controller.get(expected['cluster_id'])
#check whether the testset and the tests bound to it have been deleted from the db
old_test_set = self.session.query(models.TestSet)\
.filter_by(id=expected['old_test_set_id'])\
.filter_by(cluster_id=expected['cluster_id'])\
.first()
if old_test_set:
raise AssertionError(
"There must not be test_set for old deployment in db"
)
old_tests = self.session.query(models.Test)\
.filter_by(test_set_id=expected['old_test_set_id'])\
.filter_by(cluster_id=expected['cluster_id'])\
.all()
if old_tests:
raise AssertionError(
"There must not be tests for old deployment in db"
)
#check whether new test set and tests are present in db
#after 'redeployment' of cluster
new_test_set = self.session.query(models.TestSet)\
.filter_by(id=expected['new_test_set_id'])\
.filter_by(cluster_id=expected['cluster_id'])\
.first()
self.assertTrue(new_test_set)
deployment_tags = new_test_set.deployment_tags if new_test_set.deployment_tags else []
self.assertTrue(
set(deployment_tags).issubset(expected['new_depl_tags'])
)
new_tests = self.session.query(models.Test)\
.filter_by(test_set_id=expected['new_test_set_id'])\
.filter_by(cluster_id=expected['cluster_id'])\
.all()
self.assertTrue(new_tests)
for test in new_tests:
deployment_tags = test.deployment_tags if test.deployment_tags else []
self.assertTrue(
set(deployment_tags).issubset(expected['new_depl_tags'])
)
View File
@ -4,7 +4,7 @@
# and then run "tox" from this directory.
[tox]
envlist = py27, pep8, cover
envlist = py26, py27, pep8, cover
[testenv]
commands = nosetests fuel_plugin.tests.unit
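# a quick local check of a single environment (assumes tox is
# installed): "tox -e py27" runs only the py27 env, while plain
# "tox" runs the full envlist above.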