add a place for functional tests to block specific regressions

We get some bugs reported by users which we can replicate and fix
most easily in a full stack way. Having a dedicated place to put
these tests seems like a good idea. They are in no way an attempt to
test everything; they are specific tests that demonstrate a bug and
ensure that, once that particular bug is fixed, it will not come
back.

Change-Id: Idfd8680133197beabaf6e1ac8df7490c2a345b17
Related-Bug: #1522536
Sean Dague 2016-02-18 14:12:41 -05:00
parent 5d97e62d19
commit 5fe04c2ee8
3 changed files with 94 additions and 0 deletions


@@ -0,0 +1,24 @@
================================
Tests for Specific Regressions
================================

When end users report a bug that we can reproduce with a full stack
test, we should write that test, and we should keep a regression test
for that bug in our tree. It can be deleted at some future date if
needed, but largely should not be changed.

Writing Regression Tests
========================

- These should be full stack tests which inherit from
  nova.test.TestCase directly (this is to prevent coupling with
  other tests).
- They should set up a full stack cloud in their setUp via fixtures.
- They should each live in a file named test_bug_######.py.
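
To make the shape concrete, a minimal skeleton might look like the
following (the file name, class name, and bug number here are
placeholders, not a real bug)::

    # test_bug_000000.py
    from nova import test


    class TestBug000000(test.TestCase):

        def setUp(self):
            super(TestBug000000, self).setUp()
            # build the full stack cloud this bug needs, via fixtures

        def test_reproduce_bug_000000(self):
            # drive the stack through the reported failure and assert
            # the fixed behavior
            pass

Inheriting from nova.test.TestCase directly, rather than from a
shared functional base class, means a later refactor of common test
helpers cannot silently change what a regression test exercises.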

Writing Tests Before the Bug is Fixed
=====================================

TODO: describe writing and landing tests before the bug is fixed as a
reproducer.


@@ -0,0 +1,70 @@
# Copyright 2016 HPE, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg

import nova.scheduler.utils
import nova.servicegroup
from nova import test
from nova.tests import fixtures as nova_fixtures
from nova.tests.functional.api import client
from nova.tests.unit import cast_as_call
import nova.tests.unit.image.fake
from nova.tests.unit import policy_fixture

CONF = cfg.CONF


class TestServerGet(test.TestCase):
    REQUIRES_LOCKING = True

    def setUp(self):
        super(TestServerGet, self).setUp()
        self.useFixture(policy_fixture.RealPolicyFixture())
        api_fixture = self.useFixture(nova_fixtures.OSAPIFixture(
            api_version='v2.1'))
        self.api = api_fixture.api

        # the image fake backend needed for image discovery
        nova.tests.unit.image.fake.stub_out_image_service(self)

        # start the services a server build needs to make progress
        self.start_service('conductor', manager=CONF.conductor.manager)
        self.flags(scheduler_driver='chance_scheduler')
        self.start_service('scheduler')
        self.network = self.start_service('network')
        self.compute = self.start_service('compute')

        # turn RPC casts into calls so failures surface in the test
        self.useFixture(cast_as_call.CastAsCall(self.stubs))
        self.addCleanup(nova.tests.unit.image.fake.FakeImageService_reset)

        self.image_id = self.api.get_images()[0]['id']
        self.flavor_id = self.api.get_flavors()[0]['id']

    def test_id_overlap(self):
        """Regression test for bug #1522536.

        Before fixing this bug, getting a numeric id caused a 500
        error because it treated the numeric value as the db index,
        fetched the server, but then processing of extensions blew
        up. Since we have fixed this bug it returns a 404, which is
        expected. In the future a 400 might be more appropriate.
        """
        server = dict(name='server1',
                      imageRef=self.image_id,
                      flavorRef=self.flavor_id)
        self.api.post_server({'server': server})
        self.assertRaises(client.OpenStackApiNotFoundException,
                          self.api.get_server, 1)