make failures on api_samples more clear

This changes the failure mode for api_samples in two ways. First, it
pretty prints the two documents being compared whenever there is a
failure, which makes finding the differences by eye a lot easier.
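
As a rough sketch of the idea (a standalone example; the sample
documents and the exact message layout are invented here, not the
strings this change uses), pprint.pformat() turns each document into an
indented, one-key-per-line dump that can be embedded in the failure
message:

import pprint

# invented sample documents, just to show the output shape
expected = {'server': {'id': 'abc123', 'status': 'ACTIVE'}}
actual = {'server': {'id': 'abc123', 'status': 'BUILD', 'progress': 0}}

pp = pprint.PrettyPrinter(indent=4)
if expected != actual:
    # pformat() returns the indented dump as a string, so it can be
    # embedded in the exception text rather than printed to stdout
    raise AssertionError("Documents do not match\n"
                         "Expected:\n%s\n\n"
                         "Actual:\n%s\n" % (pp.pformat(expected),
                                            pp.pformat(actual)))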

Second, it adds a short circuit when comparing lists if both are of
length 1. The list comparison logic is convoluted, and it produces very
unhelpful error messages when comparing two single-element lists where
one has extra keys. This doesn't help every situation, but the case is
common enough that it makes these failures a lot easier to diagnose.
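
A minimal sketch of the short circuit (the helper names below are
invented for illustration; the real check lives in _compare_result in
the diff): when both lists hold exactly one element, compare those
elements directly so the failure names the mismatched keys instead of a
vague list-level mismatch.

# simplified, hypothetical helpers -- not the Nova implementation
def compare_dicts(expected, result, path):
    missing = set(expected) - set(result)
    extra = set(result) - set(expected)
    if missing or extra:
        raise AssertionError("%s: missing keys %s, extra keys %s" %
                             (path, sorted(missing), sorted(extra)))


def compare_lists(expected, result, path):
    # short circuit: two single-element lists are compared element to
    # element, which points straight at the offending keys
    if len(result) == len(expected) == 1:
        return compare_dicts(expected[0], result[0], path)
    # otherwise a loop that tries to match every result element against
    # every remaining expected element would run here, and its failures
    # are much harder to read


# fails with: servers: missing keys [], extra keys ['status']
compare_lists([{'id': 1, 'name': 'test'}],
              [{'id': 1, 'name': 'test', 'status': 'ACTIVE'}],
              "servers")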

All of this is extremely helpful in figuring out what went wrong when
decomposing an API extension, and is thus part of

blueprint api-no-more-extensions

Change-Id: I5f487c6798485a58cc020723dd5e880720c51bed
Sean Dague 2016-06-21 13:36:19 -04:00
parent 41616f9d55
commit 574a9a3d53
1 changed file with 38 additions and 2 deletions


@@ -14,6 +14,7 @@
# under the License.
import os
import pprint
import re
from oslo_serialization import jsonutils
@@ -24,6 +25,9 @@ from nova.tests.functional import integrated_helpers
PROJECT_ID = "6f70656e737461636b20342065766572"
# for pretty printing errors
pp = pprint.PrettyPrinter(indent=4)
class NoMatch(test.TestingException):
    pass
@@ -185,6 +189,24 @@ class ApiSampleTestBase(integrated_helpers._IntegratedTestBase):
            expected = expected[:]
            extra = []
            # if it's a list of 1, do the simple compare which gives a
            # better error message.
            if len(result) == len(expected) == 1:
                return self._compare_result(expected[0], result[0], result_str)
            # This is clever enough to need some explanation. What we
            # are doing here is looping the result list, and trying to
            # compare it to every item in the expected list. If there
            # is more than one, we're going to get fails. We ignore
            # those. But every time we match an expected we drop it,
            # and break to the next iteration. Every time we hit the
            # end of the iteration, we add our results into a bucket
            # of non matched.
            #
            # This results in poor error messages because we don't
            # really know why the elements failed to match each
            # other. A more complicated diff might be nice.
            for res_obj in result:
                for i, ex_obj in enumerate(expected):
                    try:
@@ -347,6 +369,15 @@ class ApiSampleTestBase(integrated_helpers._IntegratedTestBase):
            response_data = objectify(response_data)
            response_result = self._compare_result(template_data,
                                                   response_data, "Response")
        except NoMatch as e:
            raise NoMatch("\nFailed to match Template to Response: \n%s\n"
                          "Template: %s\n\n"
                          "Response: %s\n\n" %
                          (e,
                           pp.pformat(template_data),
                           pp.pformat(response_data)))
        try:
            # NOTE(danms): replace some of the subs with patterns for the
            # doc/api_samples check, which won't have things like the
            # correct compute host name. Also let the test do some of its
@@ -361,8 +392,13 @@ class ApiSampleTestBase(integrated_helpers._IntegratedTestBase):
            sample_data = objectify(sample_data)
            self._compare_result(template_data, sample_data, "Sample")
            return response_result
        except NoMatch:
            raise
        except NoMatch as e:
            raise NoMatch("\nFailed to match Template to Sample: \n%s\n"
                          "Template: %s\n\n"
                          "Sample: %s\n\n" %
                          (e,
                           pp.pformat(template_data),
                           pp.pformat(sample_data)))
    def _get_host(self):
        return 'http://openstack.example.com'