Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README that notes where to find
ongoing work and how to recover the repo if needed at some
future point (as in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: Iba024665b04ba63cde470556068294541a48ac57
Tony Breeds
2017-09-12 16:12:39 -06:00
parent 33dc6c8c67
commit a5af506d46
103 changed files with 14 additions and 25870 deletions

.gitignore

@@ -1,21 +0,0 @@
__pycache__
./build
MANIFEST
dist
tags
TAGS
apidocs
_trial_temp
doc/_build
.testrepository
.lp_creds
./testtools.egg-info
*.pyc
*.swp
*~
testtools.egg-info
/build/
/.env/
/.eggs/
AUTHORS
ChangeLog


@@ -1,4 +0,0 @@
[gerrit]
host=review.testing-cabal.org
port=29418
project=testing-cabal/testtools.git


@@ -1,4 +0,0 @@
[DEFAULT]
test_command=${PYTHON:-python} -m subunit.run $LISTOPT $IDOPTION testtools.tests.test_suite
test_id_option=--load-list $IDFILE
test_list_option=--list


@@ -1,20 +0,0 @@
language: python
python:
- "2.7"
- "3.3"
- "3.4"
- "3.5"
- "3.6"
- "pypy"
install:
- pip install -U pip wheel setuptools
- pip install sphinx Twisted
- pip install .[test]
script:
- python -m testtools.run testtools.tests.test_suite
# Sphinx only supports 2.7 or >= 3.4
- if [ ${TRAVIS_PYTHON_VERSION} = "3.3" ]; then travis_terminate 0; fi
- make clean-sphinx docs

LICENSE

@@ -1,62 +0,0 @@
Copyright (c) 2008-2011 Jonathan M. Lange <jml@mumak.net> and the testtools
authors.
The testtools authors are:
* Canonical Ltd
* Twisted Matrix Labs
* Jonathan Lange
* Robert Collins
* Andrew Bennetts
* Benjamin Peterson
* Jamu Kakar
* James Westby
* Martin [gz]
* Michael Hudson-Doyle
* Aaron Bentley
* Christian Kampka
* Gavin Panella
* Martin Pool
* Vincent Ladeuil
* Nikola Đipanov
* Tristan Seligmann
* Julian Edwards
* Jonathan Jacobs
and are collectively referred to as "testtools developers".
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Some code in testtools/run.py taken from Python's unittest module:
Copyright (c) 1999-2003 Steve Purcell
Copyright (c) 2003-2010 Python Software Foundation
This module is free software, and you may redistribute it and/or modify
it under the same terms as Python itself, so long as this copyright message
and disclaimer are retained in their original form.
IN NO EVENT SHALL THE AUTHOR BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT,
SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF
THIS CODE, EVEN IF THE AUTHOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE. THE CODE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS,
AND THERE IS NO OBLIGATION WHATSOEVER TO PROVIDE MAINTENANCE,
SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.


@@ -1,7 +0,0 @@
include LICENSE
include Makefile
include MANIFEST.in
include NEWS
include README.rst
include .gitignore
prune doc/_build


@@ -1,56 +0,0 @@
# Copyright (c) 2008-2013 testtools developers. See LICENSE for details.
PYTHON=python
SOURCES=$(shell find testtools -name "*.py")
check:
PYTHONPATH=$(PWD) $(PYTHON) -m testtools.run testtools.tests.test_suite
TAGS: ${SOURCES}
ctags -e -R testtools/
tags: ${SOURCES}
ctags -R testtools/
clean: clean-sphinx
rm -f TAGS tags
find testtools -name "*.pyc" -exec rm '{}' \;
prerelease:
# An existing MANIFEST breaks distutils sometimes. Avoid that.
-rm MANIFEST
release:
./setup.py sdist bdist_wheel upload --sign
$(PYTHON) scripts/_lp_release.py
snapshot: prerelease
./setup.py sdist bdist_wheel
### Documentation ###
apidocs:
# pydoctor emits deprecation warnings under Ubuntu 10.10 LTS
PYTHONWARNINGS='ignore::DeprecationWarning' \
pydoctor --make-html --add-package testtools \
--docformat=restructuredtext --project-name=testtools \
--project-url=https://github.com/testing-cabal/testtools
doc/news.rst:
ln -s ../NEWS doc/news.rst
docs: doc/news.rst docs-sphinx
rm doc/news.rst
docs-sphinx: html-sphinx
# Clean out generated documentation
clean-sphinx:
cd doc && make clean
# Build the html docs using Sphinx.
html-sphinx:
cd doc && make html
.PHONY: apidocs docs-sphinx clean-sphinx html-sphinx docs
.PHONY: check clean prerelease release

NEWS

File diff suppressed because it is too large

README

@@ -0,0 +1,14 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.


@@ -1,95 +0,0 @@
=========
testtools
=========
testtools is a set of extensions to the Python standard library's unit testing
framework.
These extensions have been derived from years of experience with unit testing
in Python and come from many different sources.
Documentation
-------------
If you would like to learn more about testtools, consult our documentation in
the 'doc/' directory. You might like to start at 'doc/overview.rst' or
'doc/for-test-authors.rst'.
Licensing
---------
This project is distributed under the MIT license and copyright is owned by
Jonathan M. Lange and the testtools authors. See LICENSE for details.
Some code in 'testtools/run.py' is taken from Python's unittest module, and is
copyright Steve Purcell and the Python Software Foundation; it is distributed
under the same license as Python. See LICENSE for details.
Supported platforms
-------------------
* Python 2.7+ or 3.3+ / pypy (2.x+)
If you would like to use testtools for earlier Pythons, please use testtools
1.9.0, or for *really* old Pythons, testtools 0.9.15.
testtools probably works on all OSes that Python works on, but is most heavily
tested on Linux and OS X.
Optional Dependencies
---------------------
If you would like to use our Twisted support, then you will need Twisted.
If you want to use ``fixtures`` then you can either install fixtures (e.g. from
https://launchpad.net/python-fixtures or http://pypi.python.org/pypi/fixtures)
or just make sure your fixture objects obey the same protocol.
Bug reports and patches
-----------------------
Please report bugs using Launchpad at <https://bugs.launchpad.net/testtools>.
Patches should be submitted as Github pull requests, or mailed to the authors.
See ``doc/hacking.rst`` for more details.
There's no mailing list for this project yet, however the testing-in-python
mailing list may be a useful resource:
* Address: testing-in-python@lists.idyll.org
* Subscription link: http://lists.idyll.org/listinfo/testing-in-python
History
-------
testtools used to be called 'pyunit3k'. The name was changed to avoid
conflating the library with the Python 3.0 release (commonly referred to as
'py3k').
Thanks
------
* Canonical Ltd
* Bazaar
* Twisted Matrix Labs
* Robert Collins
* Andrew Bennetts
* Benjamin Peterson
* Jamu Kakar
* James Westby
* Martin [gz]
* Michael Hudson-Doyle
* Aaron Bentley
* Christian Kampka
* Gavin Panella
* Martin Pool
* Julia Varlamova
* ClusterHQ Ltd
* Tristan Seligmann
* Jonathan Jacobs


@@ -1,89 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
-rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/testtools.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/testtools.qhc"
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
"run these through (pdf)latex."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."


@@ -1,41 +0,0 @@
testtools API documentation
===========================
Generated reference documentation for all the public functionality of
testtools.
Please :doc:`send patches </hacking>` if you notice anything confusing or
wrong, or that could be improved.
.. toctree::
:maxdepth: 2
testtools
---------
.. automodule:: testtools
:members:
testtools.assertions
--------------------
.. automodule:: testtools.assertions
:members:
testtools.matchers
------------------
.. automodule:: testtools.matchers
:members:
testtools.twistedsupport
-------------------------
.. automodule:: testtools.twistedsupport
:members:
:special-members: __init__


@@ -1,203 +0,0 @@
# -*- coding: utf-8 -*-
#
# testtools documentation build configuration file, created by
# sphinx-quickstart on Sun Nov 28 13:45:40 2010.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.append(os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'testtools'
copyright = u'2010-2016, The testtools authors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = 'VERSION'
# The full version, including alpha/beta/rc tags.
release = 'VERSION'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_use_modindex = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'testtoolsdoc'
# -- Options for LaTeX output --------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'testtools.tex', u'testtools Documentation',
u'The testtools authors', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_use_modindex = True
intersphinx_mapping = {
'py2': ('https://docs.python.org/2', None),
'py3': ('https://docs.python.org/3', None),
'twisted': ('https://twistedmatrix.com/documents/current/api/', None),
}


@@ -1,460 +0,0 @@
============================
testtools for framework folk
============================
Introduction
============
In addition to having many features :doc:`for test authors
<for-test-authors>`, testtools also has many bits and pieces that are useful
for folk who write testing frameworks.
If you are the author of a test runner, are working on a very large
unit-tested project, are trying to get one testing framework to play nicely
with another or are hacking away at getting your test suite to run in parallel
over a heterogeneous cluster of machines, this guide is for you.
This manual is a summary. You can get details by consulting the
:doc:`testtools API docs </api>`.
Extensions to TestCase
======================
In addition to the ``TestCase`` specific methods, we have extensions for
``TestSuite`` that also apply to ``TestCase`` (because ``TestCase`` and
``TestSuite`` follow the Composite pattern).
Custom exception handling
-------------------------
testtools provides a way to control how test exceptions are handled. To do
this, add a new exception to ``self.exception_handlers`` on a
``testtools.TestCase``. For example::
>>> self.exception_handlers.insert(-1, (ExceptionClass, handler))
Having done this, if any of ``setUp``, ``tearDown``, or the test method raise
``ExceptionClass``, ``handler`` will be called with the test case, test result
and the raised exception.
Use this if you want to add a new kind of test result, that is, if you think
that ``addError``, ``addFailure`` and so forth are not enough for your needs.
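The handler protocol described above can be sketched without testtools. Everything below is an illustrative, self-contained stand-in (the names ``FakeResult``, ``run_with_handlers`` and ``SkipRequest`` are invented for the sketch), not the real testtools implementation:

```python
# Stand-in for the exception_handlers protocol: a list of
# (exception_class, handler) pairs consulted in order when a test
# phase raises. Not the real testtools code.

class FakeResult:
    def __init__(self):
        self.events = []

def run_with_handlers(case_name, body, handlers, result):
    """Run `body`; on an exception, dispatch to the first matching handler."""
    try:
        body()
    except Exception as exc:
        for exc_class, handler in handlers:
            if isinstance(exc, exc_class):
                # As described above, the handler receives the test case
                # (just a name here), the result, and the raised exception.
                handler(case_name, result, exc)
                return
        raise

class SkipRequest(Exception):
    pass

def handle_skip(case, result, exc):
    result.events.append(("skip", case, str(exc)))

result = FakeResult()
handlers = [(SkipRequest, handle_skip)]

def body():
    raise SkipRequest("not supported here")

run_with_handlers("test_feature", body, handlers, result)
print(result.events)  # [('skip', 'test_feature', 'not supported here')]
```

Registering the pair ahead of the catch-all slot (the ``insert(-1, ...)`` in the doctest above) is what lets the new outcome take precedence over the default error handling.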
Controlling test execution
--------------------------
If you want to control more than just how exceptions are raised, you can
provide a custom ``RunTest`` to a ``TestCase``. The ``RunTest`` object can
change everything about how the test executes.
To work with ``testtools.TestCase``, a ``RunTest`` must have a factory that
takes a test and an optional list of exception handlers and an optional
last_resort handler. Instances returned by the factory must have a ``run()``
method that takes an optional ``TestResult`` object.
The default is ``testtools.runtest.RunTest``, which calls ``setUp``, the test
method, ``tearDown`` and clean ups (see :ref:`addCleanup`) in the normal, vanilla
way that Python's standard unittest_ does.
To specify a ``RunTest`` for all the tests in a ``TestCase`` class, do something
like this::
class SomeTests(TestCase):
run_tests_with = CustomRunTestFactory
To specify a ``RunTest`` for a specific test in a ``TestCase`` class, do::
class SomeTests(TestCase):
@run_test_with(CustomRunTestFactory, extra_arg=42, foo='whatever')
def test_something(self):
pass
In addition, either of these can be overridden by passing a factory in to the
``TestCase`` constructor with the optional ``runTest`` argument.
Test renaming
-------------
``testtools.clone_test_with_new_id`` is a function to copy a test case
instance to one with a new name. This is helpful for implementing test
parameterization.
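The cloning idea can be sketched with a shallow copy and a replaced ``id``; this is a stand-in for illustration (the real helper is ``testtools.clone_test_with_new_id``), using stdlib ``unittest``:

```python
import copy
import unittest

def clone_with_new_id(test, new_id):
    """Shallow-copy a test case and give the copy a new id (sketch only)."""
    clone = copy.copy(test)
    clone.id = lambda: new_id  # shadow the bound id() method on the clone
    return clone

class SampleTest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

# Parameterize one test method into two named variants.
base = SampleTest("test_add")
variants = [clone_with_new_id(base, f"{base.id()}(scenario={n})")
            for n in ("a", "b")]
print([t.id() for t in variants])
```

Each clone can then be added to a suite and run independently, which is the essence of test parameterization.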
.. _force_failure:
Delayed Test Failure
--------------------
Setting the ``testtools.TestCase.force_failure`` instance variable to True will
cause ``testtools.RunTest`` to fail the test case after the test has finished.
This is useful when you want to cause a test to fail, but don't want to
prevent the remainder of the test code from being executed.
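The flag-check pattern can be sketched in isolation; this is an invented stand-in for how a runner might honour such a flag, not the real ``RunTest``:

```python
# Illustrative stand-in for the force_failure check: the runner lets the
# test body finish, then fails the case if the flag was set.

class MiniCase:
    force_failure = False

    def run_body(self):
        self.checked = []
        self.checked.append("step one ran")
        self.force_failure = True          # mark as failed, but keep going
        self.checked.append("step two still ran")

def run(case):
    case.run_body()
    # Only after the body completes does the runner consult the flag.
    return "FAIL" if case.force_failure else "PASS"

case = MiniCase()
outcome = run(case)
print(outcome, case.checked)  # FAIL ['step one ran', 'step two still ran']
```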
Exception formatting
--------------------
Testtools ``TestCase`` instances format their own exceptions. The attribute
``__testtools_tb_locals__`` controls whether to include local variables in the
formatted exceptions.
Test placeholders
=================
Sometimes, it's useful to be able to add things to a test suite that are not
actually tests. For example, you might wish to represent import failures
that occur during test discovery as tests, so that your test result object
doesn't have to do special work to handle them nicely.
testtools provides two such objects, called "placeholders": ``PlaceHolder``
and ``ErrorHolder``. ``PlaceHolder`` takes a test id and an optional
description. When it's run, it succeeds. ``ErrorHolder`` takes a test id,
an error, and an optional short description. When it's run, it reports that
error.
These placeholders are best used to log events that occur outside the test
suite proper, but are still very relevant to its results.
e.g.::
>>> suite = TestSuite()
>>> suite.add(PlaceHolder('I record an event'))
>>> suite.run(TextTestResult(verbose=True))
I record an event [OK]
Test instance decorators
========================
DecorateTestCaseResult
----------------------
This object calls out to your code when ``run`` / ``__call__`` are called and
allows the result object that will be used to run the test to be altered. This
is very useful when working with a test runner that doesn't know your test case
requirements. For instance, it can be used to inject a ``unittest2`` compatible
adapter when someone attempts to run your test suite with a ``TestResult`` that
does not support ``addSkip`` or other ``unittest2`` methods. Similarly it can
aid the migration to ``StreamResult``.
e.g.::
>>> suite = TestSuite()
>>> suite = DecorateTestCaseResult(suite, ExtendedToOriginalDecorator)
Extensions to TestResult
========================
StreamResult
------------
``StreamResult`` is a new API for dealing with test case progress that supports
concurrent and distributed testing without the various issues that
``TestResult`` has such as buffering in multiplexers.
The design has several key principles:
* Nothing that requires up-front knowledge of all tests.
* Deal with tests running in concurrent environments, potentially distributed
across multiple processes (or even machines). This implies allowing multiple
tests to be active at once, supplying time explicitly, being able to
differentiate between tests running in different contexts and removing any
assumption that tests are necessarily in the same process.
* Make the API as simple as possible - each aspect should do one thing well.
The ``TestResult`` API this is intended to replace has three different clients.
* Each executing ``TestCase`` notifies the ``TestResult`` about activity.
* The testrunner running tests uses the API to find out whether the test run
had errors, how many tests ran and so on.
* Finally, each ``TestCase`` queries the ``TestResult`` to see whether the test
run should be aborted.
With ``StreamResult`` we need to be able to provide a ``TestResult`` compatible
adapter (``StreamToExtendedDecorator``) to allow incremental migration.
However, we don't need to conflate things long term. So - we define three
separate APIs, and merely mix them together to provide the
``StreamToExtendedDecorator``. ``StreamResult`` is the first of these APIs -
meeting the needs of ``TestCase`` clients. It handles events generated by
running tests. See the API documentation for ``testtools.StreamResult`` for
details.
StreamSummary
-------------
Secondly we define the ``StreamSummary`` API which takes responsibility for
collating errors, detecting incomplete tests and counting tests. This provides
a compatible API with those aspects of ``TestResult``. Again, see the API
documentation for ``testtools.StreamSummary``.
TestControl
-----------
Lastly we define the ``TestControl`` API which is used to provide the
``shouldStop`` and ``stop`` elements from ``TestResult``. Again, see the API
documentation for ``testtools.TestControl``. ``TestControl`` can be paired with
a ``StreamFailFast`` to trigger aborting a test run when a failure is observed.
Aborting multiple workers in a distributed environment requires hooking
whatever signalling mechanism the distributed environment has up to a
``TestControl`` in each worker process.
StreamTagger
------------
A ``StreamResult`` filter that adds or removes tags from events::
>>> from testtools import StreamTagger
>>> sink = StreamResult()
>>> result = StreamTagger([sink], set(['add']), set(['discard']))
>>> result.startTestRun()
>>> # Run tests against result here.
>>> result.stopTestRun()
StreamToDict
------------
A simplified API for dealing with ``StreamResult`` streams. Each test is
buffered until it completes and then reported as a trivial dict. This makes
writing analysers very easy - you can ignore all the plumbing and just work
with the result. e.g.::
>>> from testtools import StreamToDict
>>> def handle_test(test_dict):
... print(test_dict['id'])
>>> result = StreamToDict(handle_test)
>>> result.startTestRun()
>>> # Run tests against result here.
>>> # At stopTestRun() any incomplete buffered tests are announced.
>>> result.stopTestRun()
ExtendedToStreamDecorator
-------------------------
This is a hybrid object that combines both the ``Extended`` and ``Stream``
``TestResult`` APIs into one class, but only emits ``StreamResult`` events.
This is useful when a ``StreamResult`` stream is desired, but you cannot
be sure that the tests which will run have been updated to the ``StreamResult``
API.
StreamToExtendedDecorator
-------------------------
This is a simple converter that emits the ``ExtendedTestResult`` API in
response to events from the ``StreamResult`` API. Useful when outputting
``StreamResult`` events from a ``TestCase`` but the supplied ``TestResult``
does not support the ``status`` and ``file`` methods.
StreamToQueue
-------------
This is a ``StreamResult`` decorator for reporting tests from multiple threads
at once. Each method submits an event to a supplied Queue object as a simple
dict. See ``ConcurrentStreamTestSuite`` for a convenient way to use this.
TimestampingStreamResult
------------------------
This is a ``StreamResult`` decorator for adding timestamps to events that lack
them. This allows writing the simplest possible generators of events and
passing the events via this decorator to get timestamped data. As long as
no buffering/queueing or blocking happen before the timestamper sees the event
the timestamp will be as accurate as if the original event had it.
StreamResultRouter
------------------
This is a ``StreamResult`` which forwards events to an arbitrary set of target
``StreamResult`` objects. Events that have no forwarding rule are passed on to
a fallback ``StreamResult`` for processing. The mapping can be changed at
runtime, allowing great flexibility and responsiveness to changes. Because
the mapping can change dynamically and there could be the same recipient for
two different maps, ``startTestRun`` and ``stopTestRun`` handling is fine
grained and up to the user.
If no fallback has been supplied, an unroutable event will raise an exception.
For instance::
>>> router = StreamResultRouter()
>>> sink = doubles.StreamResult()
>>> router.add_rule(sink, 'route_code_prefix', route_prefix='0',
... consume_route=True)
>>> router.status(test_id='foo', route_code='0/1', test_status='uxsuccess')
Would remove the ``0/`` from the route_code and forward the event like so::
>>> sink.status('test_id=foo', route_code='1', test_status='uxsuccess')
See ``pydoc testtools.StreamResultRouter`` for details.
TestResult.addSkip
------------------
This method is called on result objects when a test skips. The
``testtools.TestResult`` class records skips in its ``skip_reasons`` instance
dict. These can be reported on in much the same way as successful tests.
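The same bookkeeping can be mimicked with stdlib ``unittest`` (a stand-in for illustration; testtools provides this dict natively):

```python
import unittest

class SkipRecordingResult(unittest.TestResult):
    """Stand-in mimicking the skip_reasons dict described above."""

    def __init__(self):
        super().__init__()
        self.skip_reasons = {}

    def addSkip(self, test, reason):
        super().addSkip(test, reason)
        # Group test ids by the reason they skipped, testtools-style.
        self.skip_reasons.setdefault(reason, []).append(test.id())

class Sample(unittest.TestCase):
    @unittest.skip("needs network")
    def test_remote(self):
        pass

result = SkipRecordingResult()
unittest.TestLoader().loadTestsFromTestCase(Sample).run(result)
print(result.skip_reasons)
```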
TestResult.time
---------------
This method controls the time used by a ``TestResult``, permitting accurate
timing of test results gathered on different machines or in different threads.
See pydoc testtools.TestResult.time for more details.
ThreadsafeForwardingResult
--------------------------
A ``TestResult`` which forwards activity to another test result, but synchronises
on a semaphore to ensure that all the activity for a single test arrives in a
batch. This allows simple TestResults which do not expect concurrent test
reporting to be fed the activity from multiple test threads, or processes.
Note that when you provide multiple errors for a single test, the target sees
each error as a distinct complete test.
MultiTestResult
---------------
A test result that dispatches its events to many test results. Use this
to combine multiple different test result objects into one test result object
that can be passed to ``TestCase.run()`` or similar. For example::
a = TestResult()
b = TestResult()
combined = MultiTestResult(a, b)
combined.startTestRun() # Calls a.startTestRun() and b.startTestRun()
Each of the methods on ``MultiTestResult`` will return a tuple of whatever the
component test results return.
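The dispatch-and-collect behaviour can be sketched generically; this is an invented stand-in (``MultiResultSketch``, ``RecordingResult``), not the real class:

```python
# Stand-in for MultiTestResult's dispatch: every method call is forwarded
# to each wrapped result, and the return values come back as a tuple.

class RecordingResult:
    def __init__(self, name):
        self.name = name
        self.calls = []

    def startTestRun(self):
        self.calls.append("startTestRun")
        return self.name

class MultiResultSketch:
    def __init__(self, *results):
        self._results = results

    def __getattr__(self, method):
        # Any unknown attribute becomes a dispatcher over the wrapped results.
        def dispatch(*args, **kwargs):
            return tuple(getattr(r, method)(*args, **kwargs)
                         for r in self._results)
        return dispatch

a, b = RecordingResult("a"), RecordingResult("b")
combined = MultiResultSketch(a, b)
returned = combined.startTestRun()
print(returned)            # ('a', 'b')
print(a.calls, b.calls)    # ['startTestRun'] ['startTestRun']
```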
TestResultDecorator
-------------------
Not strictly a ``TestResult``, but something that implements the extended
``TestResult`` interface of testtools. It can be subclassed to create objects
that wrap ``TestResults``.
TextTestResult
--------------
A ``TestResult`` that provides a text UI very similar to the Python standard
library UI. Key differences are that it supports the extended outcomes and
details API, and is completely encapsulated into the result object, permitting
it to be used without a 'TestRunner' object. Not all the Python 2.7 outcomes
are displayed (yet). It is also a 'quiet' result with no dots or verbose mode.
These limitations will be corrected soon.
ExtendedToOriginalDecorator
---------------------------
Adapts legacy ``TestResult`` objects, such as those found in older Pythons, to
meet the testtools ``TestResult`` API.
Test Doubles
------------
In testtools.testresult.doubles there are three test doubles that testtools
uses for its own testing: ``Python26TestResult``, ``Python27TestResult``,
``ExtendedTestResult``. These TestResult objects implement a single variation of
the TestResult API each, and log activity to a list ``self._events``. These are
made available for the convenience of people writing their own extensions.
startTestRun and stopTestRun
----------------------------
Python 2.7 added hooks ``startTestRun`` and ``stopTestRun`` which are called
before and after the entire test run. 'stopTestRun' is particularly useful for
test results that wish to produce summary output.
``testtools.TestResult`` provides default ``startTestRun`` and ``stopTestRun``
methods, and the default testtools runner will call these methods
appropriately.
The ``startTestRun`` method will reset any errors, failures and so forth on
the result, making the result object look as if no tests have been run.
Extensions to TestSuite
=======================
ConcurrentTestSuite
-------------------
A TestSuite for parallel testing. This is used in conjunction with a helper that
runs a single suite in some parallel fashion (for instance, forking, handing
off to a subprocess, to a compute cloud, or simple threads).
ConcurrentTestSuite uses the helper to get a number of separate runnable
objects, each with a ``run(result)`` method, and runs them all in threads, using
``ThreadsafeForwardingResult`` to coalesce their activity.
ConcurrentStreamTestSuite
-------------------------
A variant of ConcurrentTestSuite that uses the new StreamResult API instead of
the TestResult API. ConcurrentStreamTestSuite coordinates running some number
of test/suites concurrently, with one StreamToQueue per test/suite.
Each test/suite gets given its own ExtendedToStreamDecorator +
TimestampingStreamResult wrapped StreamToQueue instance, forwarding onto the
StreamResult that ConcurrentStreamTestSuite.run was called with.
ConcurrentStreamTestSuite is a thin shim and it is easy to implement your own
specialised form if that is needed.
FixtureSuite
------------
A test suite that sets up a fixture_ before running any tests, and then tears
it down after all of the tests are run. The fixture is *not* made available to
any of the tests due to there being no standard channel for suites to pass
information to the tests they contain (and we don't have enough data on what
such a channel would need to achieve to design a good one yet - or even decide
if it is a good idea).
sorted_tests
------------
Given the composite structure of TestSuite / TestCase, sorting tests is
problematic - you can't tell what functionality is embedded into custom Suite
implementations. In order to deliver consistent test orders when using test
discovery (see http://bugs.python.org/issue16709), testtools flattens and
sorts tests that have the standard TestSuite, and defines a new method
sort_tests, which can be used by non-standard TestSuites to know when they
should sort their tests. An example implementation can be seen at
``FixtureSuite.sorted_tests``.
If there are duplicate test ids in a suite, ValueError will be raised.
filter_by_ids
-------------
Similarly to ``sorted_tests``, running a subset of tests is problematic - the
standard run interface provides no way to limit what runs. Rather than
confounding the two problems (selection and execution) we defined a method
that filters the tests in a suite (or a case) by their unique test id.
If you are writing custom wrapping suites, consider implementing ``filter_by_ids``
to support this (though most wrappers that subclass ``unittest.TestSuite`` will
work just fine; see ``testtools.testsuite.filter_by_ids`` for details).
Extensions to TestRunner
========================
To facilitate custom listing of tests, ``testtools.run.TestProgram`` attempts
to call ``list`` on the ``TestRunner``, falling back to a generic
implementation if it is not present.
.. _unittest: http://docs.python.org/library/unittest.html
.. _fixture: http://pypi.python.org/pypi/fixtures

File diff suppressed because it is too large


@@ -1,207 +0,0 @@
=========================
Contributing to testtools
=========================
Bugs and patches
----------------
`File bugs <https://bugs.launchpad.net/testtools/+filebug>`_ on Launchpad, and
`send patches <https://github.com/testing-cabal/testtools/>`_ on Github.
Coding style
------------
In general, follow `PEP 8`_ except where consistency with the standard
library's unittest_ module would suggest otherwise.
testtools currently supports Python 2.6 and later, including Python 3.
Copyright assignment
--------------------
Part of testtools' raison d'être is to provide Python with improvements to the
testing code it ships. For that reason we require all contributions (that are
non-trivial) to meet one of the following rules:
* be inapplicable for inclusion in Python.
* be able to be included in Python without further contact with the contributor.
* be copyright assigned to Jonathan M. Lange.
Please pick one of these and specify it when contributing code to testtools.
Licensing
---------
All code that is not copyright assigned to Jonathan M. Lange (see Copyright
Assignment above) needs to be licensed under the `MIT license`_ that testtools
uses, so that testtools can ship it.
Building
--------
Building and installing testtools requires a reasonably recent version of pip.
At the time of writing, pip version 7.1.0 (which is bundled with virtualenv
13.1.0) is a good choice. To install testtools from source and all its test
dependencies, install the ``test`` extra::
pip install -e .[test]
Installing via ``python setup.py install`` may not work, due to issues with
easy_install.
Testing
-------
Please write tests for every feature. This project ought to be a model
example of well-tested Python code!
Take particular care to make sure the *intent* of each test is clear.
You can run tests with ``make check``.
By default, testtools hides many levels of its own stack when running tests.
This is for the convenience of users, who do not care about how, say, assert
methods are implemented. However, when writing tests for testtools itself, it
is often useful to see all levels of the stack. To do this, add
``run_tests_with = FullStackRunTest`` to the top of a test's class definition.
Discussion
----------
When submitting a patch, it will help the review process a lot if there's a
clear explanation of what the change does and why you think the change is a
good idea. For crasher bugs, this is generally a no-brainer, but for UI bugs
& API tweaks, the reason something is an improvement might not be obvious, so
it's worth spelling out.
If you are thinking of implementing a new feature, you might want to have that
discussion on the `mailing list <mailto:testtools-dev@lists.launchpad.net>`_ before the
patch goes up for review. This is not at all mandatory, but getting feedback
early can help avoid dead ends.
Documentation
-------------
Documents are written using the Sphinx_ variant of reStructuredText_. All
public methods, functions, classes and modules must have API documentation.
When changing code, be sure to check the API documentation to see if it could
be improved. Before submitting changes to trunk, look over them and see if
the manuals ought to be updated.
Source layout
-------------
The top-level directory contains the ``testtools/`` package directory, and
miscellaneous files like ``README.rst`` and ``setup.py``.
The ``testtools/`` directory is the Python package itself. It is separated
into submodules for internal clarity, but all public APIs should be “promoted”
into the top-level package by importing them in ``testtools/__init__.py``.
Users of testtools should never import a submodule in order to use a stable
API. Unstable APIs like ``testtools.matchers`` and
``testtools.deferredruntest`` should be exported as submodules.
Tests belong in ``testtools/tests/``.
Committing to trunk
-------------------
Testtools is maintained using git, with its master repo at
https://github.com/testing-cabal/testtools. This gives every contributor the
ability to commit their work to their own branches. However permission must be
granted to allow contributors to commit to the trunk branch.
Commit access to trunk is obtained by joining the `testing-cabal`_, either as an
Owner or a Committer. Commit access is contingent on obeying the testtools
contribution policy, see `Copyright Assignment`_ above.
Code Review
-----------
All code must be reviewed before landing on trunk. The process is to create a
branch on Github, and make a pull request into trunk. It will then be reviewed
before it can be merged to trunk. It will be reviewed by someone:
* not the author
* a committer
As a special exception, since there are few testtools committers and thus
reviews are prone to blocking, a pull request from a committer that has not been
reviewed after 24 hours may be merged by that committer. When the team is larger
this policy will be revisited.
Code reviewers should look for the quality of what is being submitted,
including conformance with this HACKING file.
Changes which all users should be made aware of should be documented in NEWS.
We are now in full backwards compatibility mode - no more releases < 1.0.0, and
breaking compatibility will require consensus on the testtools-dev mailing list.
Exactly what constitutes a backwards incompatible change is vague, but coarsely:
* adding required arguments or required calls to something that used to work
* removing keyword or positional arguments, removing methods, functions or modules
* changing behaviour someone may have reasonably depended on
Some things are not compatibility issues:
* changes to _ prefixed methods, functions, modules, packages.
NEWS management
---------------
The file NEWS is structured as a sorted list of releases. Each release can have
a free form description and one or more sections with bullet point items.
Sections in use today are 'Improvements' and 'Changes'. To ease merging between
branches, the bullet points are kept alphabetically sorted. The release NEXT is
permanently present at the top of the list.
Releasing
---------
Prerequisites
+++++++++++++
Membership in the testing-cabal org on github as committer.
Membership in the pypi testtools project as maintainer.
Membership in the https://launchpad.net/~testtools-committers team.
No in-progress Critical bugs on the next_ milestone.
Tasks
+++++
#. Choose a version number, say X.Y.Z
#. Under NEXT in NEWS add a heading with the version number X.Y.Z.
#. Possibly write a blurb into NEWS.
#. Commit the changes.
#. Tag the release, ``git tag -s X.Y.Z -m "Releasing X.Y.Z"``
#. Run 'make release', this:
#. Creates a source distribution and uploads to PyPI
#. Ensures all Fix Committed bugs are in the release milestone
#. Makes a release on Launchpad and uploads the tarball
#. Marks all the Fix Committed bugs as Fix Released
#. Creates a new milestone
#. If a new series has been created (e.g. 0.10.0), make the series on Launchpad.
#. Push trunk to Github, ``git push --tags origin master``
.. _PEP 8: http://www.python.org/dev/peps/pep-0008/
.. _unittest: http://docs.python.org/library/unittest.html
.. _MIT license: http://www.opensource.org/licenses/mit-license.php
.. _Sphinx: http://sphinx.pocoo.org/
.. _restructuredtext: http://docutils.sourceforge.net/rst.html
.. _testing-cabal: https://github.com/organizations/testing-cabal/
.. _next: https://launchpad.net/testtools/+milestone/next


@@ -1,37 +0,0 @@
.. testtools documentation master file, created by
sphinx-quickstart on Sun Nov 28 13:45:40 2010.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
testtools: tasteful testing for Python
======================================
testtools is a set of extensions to the Python standard library's unit testing
framework. These extensions have been derived from many years of experience
with unit testing in Python and come from many different sources. testtools
also ports recent unittest changes all the way back to Python 2.4. The next
release of testtools will change that to support versions that are maintained
by the Python community instead, to allow the use of modern language features
within testtools.
Contents:
.. toctree::
:maxdepth: 1
overview
for-test-authors
for-framework-folk
twisted-support
hacking
Changes to testtools <news>
API reference documentation <api>
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@@ -1,113 +0,0 @@
@ECHO OFF
REM Command file for Sphinx documentation
set SPHINXBUILD=sphinx-build
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. changes to make an overview over all changed/added/deprecated items
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\testtools.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\testtools.ghc
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
:end


@@ -1,101 +0,0 @@
======================================
testtools: tasteful testing for Python
======================================
testtools is a set of extensions to the Python standard library's unit testing
framework. These extensions have been derived from many years of experience
with unit testing in Python and come from many different sources.
What better way to start than with a contrived code snippet?::
from testtools import TestCase
from testtools.content import Content
from testtools.content_type import UTF8_TEXT
from testtools.matchers import Equals
from myproject import SillySquareServer
class TestSillySquareServer(TestCase):
def setUp(self):
super(TestSillySquareServer, self).setUp()
self.server = self.useFixture(SillySquareServer())
self.addCleanup(self.attach_log_file)
def attach_log_file(self):
self.addDetail(
'log-file',
Content(UTF8_TEXT,
lambda: open(self.server.logfile, 'r').readlines()))
def test_server_is_cool(self):
self.assertThat(self.server.temperature, Equals("cool"))
def test_square(self):
self.assertThat(self.server.silly_square_of(7), Equals(49))
Why use testtools?
==================
Better assertion methods
------------------------
The standard assertion methods that come with unittest aren't as helpful as
they could be, and there aren't quite enough of them. testtools adds
``assertIn``, ``assertIs``, ``assertIsInstance`` and their negatives.
Matchers: better than assertion methods
---------------------------------------
Of course, in any serious project you want to be able to have assertions that
are specific to that project and the particular problem that it is addressing.
Rather than forcing you to define your own assertion methods and maintain your
own inheritance hierarchy of ``TestCase`` classes, testtools lets you write
your own "matchers", custom predicates that can be plugged into a unit test::
def test_response_has_bold(self):
# The response has bold text.
response = self.server.getResponse()
self.assertThat(response, HTMLContains(Tag('bold', 'b')))
More debugging info, when you need it
--------------------------------------
testtools makes it easy to add arbitrary data to your test result. If you
want to know what's in a log file when a test fails, or what the load was on
the computer when a test started, or what files were open, you can add that
information with ``TestCase.addDetail``, and it will appear in the test
results if that test fails.
Extend unittest, but stay compatible and re-usable
--------------------------------------------------
testtools goes to great lengths to allow serious test authors and test
*framework* authors to do whatever they like with their tests and their
extensions while staying compatible with the standard library's unittest.
testtools has completely parametrized how exceptions raised in tests are
mapped to ``TestResult`` methods and how tests are actually executed (ever
wanted ``tearDown`` to be called regardless of whether ``setUp`` succeeds?)
It also provides many simple but handy utilities, like the ability to clone a
test, a ``MultiTestResult`` object that lets many result objects get the
results from one test suite, and adapters to bring legacy ``TestResult`` objects
into our new golden age.
Cross-Python compatibility
--------------------------
testtools gives you the very latest in unit testing technology in a way that
will work with Python 2.7, 3.3, 3.4, 3.5, and pypy.
If you wish to use testtools with Python 2.4 or 2.5, then please use testtools
0.9.15.
If you wish to use testtools with Python 2.6 or 3.2, then please use testtools
1.9.0.


@@ -1,143 +0,0 @@
.. _twisted-support:
Twisted support
===============
testtools provides support for testing Twisted code.
Matching Deferreds
------------------
testtools provides support for making assertions about synchronous
:py:class:`~twisted.internet.defer.Deferred`\s.
A "synchronous" :py:class:`~twisted.internet.defer.Deferred` is one that does
not need the reactor or any other asynchronous process in order to fire.
Normal application code can't know when a
:py:class:`~twisted.internet.defer.Deferred` is going to fire, because that is
generally left up to the reactor. Well-written unit tests provide fake
reactors, or don't use the reactor at all, so that
:py:class:`~twisted.internet.defer.Deferred`\s fire synchronously.
These matchers allow you to make assertions about when and how
:py:class:`~twisted.internet.defer.Deferred`\s fire, and about what values
they fire with.
See also `Testing Deferreds without the reactor`_ and the `Deferred howto`_.
.. autofunction:: testtools.twistedsupport.succeeded
:noindex:
.. autofunction:: testtools.twistedsupport.failed
:noindex:
.. autofunction:: testtools.twistedsupport.has_no_result
:noindex:
Running tests in the reactor
----------------------------
testtools provides support for running asynchronous Twisted tests: tests that
return a :py:class:`~twisted.internet.defer.Deferred` and run the reactor
until it fires and its callback chain is completed.
Here's how to use it::
from testtools import TestCase
from testtools.twistedsupport import AsynchronousDeferredRunTest
class MyTwistedTests(TestCase):
run_tests_with = AsynchronousDeferredRunTest
def test_foo(self):
# ...
return d
Note that you do *not* have to use a special base ``TestCase`` in order to run
Twisted tests, you should just use the regular :py:class:`testtools.TestCase`
base class.
You can also run individual tests within a test case class using the Twisted
test runner::
class MyTestsSomeOfWhichAreTwisted(TestCase):
def test_normal(self):
pass
@run_test_with(AsynchronousDeferredRunTest)
def test_twisted(self):
# ...
return d
See :py:class:`~testtools.twistedsupport.AsynchronousDeferredRunTest` and
:py:class:`~testtools.twistedsupport.AsynchronousDeferredRunTestForBrokenTwisted`
for more information.
Controlling the Twisted logs
----------------------------
Users of Twisted Trial will be accustomed to all tests logging to
``_trial_temp/test.log``. By default,
:py:class:`~testtools.twistedsupport.AsynchronousDeferredRunTest` will *not*
do this, but will instead:
1. suppress all messages logged during the test run
2. attach them as the ``twisted-log`` detail (see :ref:`details`) which is
shown if the test fails
The first behavior is controlled by the ``suppress_twisted_logging`` parameter
to :py:class:`~testtools.twistedsupport.AsynchronousDeferredRunTest`, which is
set to ``True`` by default. The second is controlled by the
``store_twisted_logs`` parameter, which is also ``True`` by default.
If ``store_twisted_logs`` is set to ``False``, you can still get the logs
attached as a detail by using the
:py:class:`~testtools.twistedsupport.CaptureTwistedLogs` fixture. Using the
:py:class:`~testtools.twistedsupport.CaptureTwistedLogs` fixture is equivalent
to setting ``store_twisted_logs`` to ``True``.
For example::
class DoNotCaptureLogsTests(TestCase):
run_tests_with = partial(AsynchronousDeferredRunTest,
store_twisted_logs=False)
def test_foo(self):
log.msg('logs from this test are not attached')
def test_bar(self):
self.useFixture(CaptureTwistedLogs())
log.msg('logs from this test *are* attached')
Converting Trial tests to testtools tests
-----------------------------------------
* Use the :py:class:`~testtools.twistedsupport.AsynchronousDeferredRunTest` runner
* Make sure to upcall to :py:meth:`.TestCase.setUp` and
:py:meth:`.TestCase.tearDown`
* Don't use ``setUpClass`` or ``tearDownClass``
* Don't expect setting ``.todo``, ``.timeout`` or ``.skip`` attributes to do
anything
* Replace
:py:meth:`twisted.trial.unittest.SynchronousTestCase.flushLoggedErrors`
with
:py:func:`~testtools.twistedsupport.flush_logged_errors`
* Replace :py:meth:`twisted.trial.unittest.TestCase.assertFailure` with
:py:func:`~testtools.twistedsupport.assert_fails_with`
* Trial spins the reactor a couple of times before cleaning it up,
:py:class:`~testtools.twistedsupport.AsynchronousDeferredRunTest` does not. If
you rely on this behavior, use
:py:class:`~testtools.twistedsupport.AsynchronousDeferredRunTestForBrokenTwisted`.
.. _Deferred Howto: http://twistedmatrix.com/documents/current/core/howto/defer.html
.. _Testing Deferreds without the reactor:
http://twistedmatrix.com/documents/current/core/howto/trial.html#testing-deferreds-without-the-reactor


@@ -1,10 +0,0 @@
# Since Twisted is an optional dependency, it doesn't get installed by `python
# setup.py install`. However, if Twisted is not installed, then the
# documentation for our Twisted support code won't render on readthedocs.
#
# Thus, this requirements.txt is specifically devoted to readthedocs.org, so
# that it knows exactly what to install in order to render the full
# documentation.
testtools[test]
Twisted


@@ -1,9 +0,0 @@
pbr>=0.11
extras>=1.0.0
fixtures>=1.3.0
# 'mimeparse' has not been uploaded by the maintainer with Python3 compat
# but someone kindly uploaded a fixed version as 'python-mimeparse'.
python-mimeparse
unittest2>=1.0.0
traceback2
six>=1.4.0


@@ -1,3 +0,0 @@
These are scripts to help with building, maintaining and releasing testtools.
There is little here for anyone except a testtools contributor.


@@ -1,232 +0,0 @@
#!/usr/bin/python
"""Release testtools on Launchpad.
Steps:
1. Make sure all "Fix committed" bugs are assigned to 'next'
2. Rename 'next' to the new version
3. Release the milestone
4. Upload the tarball
5. Create a new 'next' milestone
6. Mark all "Fix committed" bugs in the milestone as "Fix released"
Assumes that NEWS is in the parent directory, that the release sections are
underlined with '~' and the subsections are underlined with '-'.
Assumes that this file is in the 'scripts' directory a testtools tree that has
already had a tarball built and uploaded with 'python setup.py sdist upload
--sign'.
"""
from datetime import datetime, timedelta, tzinfo
import logging
import os
import sys
from launchpadlib.launchpad import Launchpad
from launchpadlib import uris
APP_NAME = 'testtools-lp-release'
CACHE_DIR = os.path.expanduser('~/.launchpadlib/cache')
SERVICE_ROOT = uris.LPNET_SERVICE_ROOT
FIX_COMMITTED = u"Fix Committed"
FIX_RELEASED = u"Fix Released"
# Launchpad file type for a tarball upload.
CODE_RELEASE_TARBALL = 'Code Release Tarball'
PROJECT_NAME = 'testtools'
NEXT_MILESTONE_NAME = 'next'
class _UTC(tzinfo):
"""UTC"""
def utcoffset(self, dt):
return timedelta(0)
def tzname(self, dt):
return "UTC"
def dst(self, dt):
return timedelta(0)
UTC = _UTC()
def configure_logging():
level = logging.INFO
log = logging.getLogger(APP_NAME)
log.setLevel(level)
handler = logging.StreamHandler()
handler.setLevel(level)
formatter = logging.Formatter("%(levelname)s: %(message)s")
handler.setFormatter(formatter)
log.addHandler(handler)
return log
LOG = configure_logging()
def get_path(relpath):
"""Get the absolute path for something relative to this file."""
return os.path.abspath(
os.path.join(
os.path.dirname(os.path.dirname(__file__)), relpath))
def assign_fix_committed_to_next(testtools, next_milestone):
"""Find all 'Fix Committed' and make sure they are in 'next'."""
fixed_bugs = list(testtools.searchTasks(status=FIX_COMMITTED))
for task in fixed_bugs:
LOG.debug("%s" % (task.title,))
if task.milestone != next_milestone:
task.milestone = next_milestone
LOG.info("Re-assigning %s" % (task.title,))
task.lp_save()
def rename_milestone(next_milestone, new_name):
"""Rename 'next_milestone' to 'new_name'."""
LOG.info("Renaming %s to %s" % (next_milestone.name, new_name))
next_milestone.name = new_name
next_milestone.lp_save()
def get_release_notes_and_changelog(news_path):
release_notes = []
changelog = []
state = None
last_line = None
def is_heading_marker(line, marker_char):
return line and line == marker_char * len(line)
LOG.debug("Loading NEWS from %s" % (news_path,))
with open(news_path, 'r') as news:
for line in news:
line = line.strip()
if state is None:
if (is_heading_marker(line, '~') and
not last_line.startswith('NEXT')):
milestone_name = last_line
state = 'release-notes'
else:
last_line = line
elif state == 'title':
# The line after the title is a heading marker line, so we
# ignore it and change state. That which follows are the
# release notes.
state = 'release-notes'
elif state == 'release-notes':
if is_heading_marker(line, '-'):
state = 'changelog'
# Last line in the release notes is actually the first
# line of the changelog.
changelog = [release_notes.pop(), line]
else:
release_notes.append(line)
elif state == 'changelog':
if is_heading_marker(line, '~'):
# Last line in changelog is actually the first line of the
# next section.
changelog.pop()
break
else:
changelog.append(line)
else:
raise ValueError("Couldn't parse NEWS")
release_notes = '\n'.join(release_notes).strip() + '\n'
changelog = '\n'.join(changelog).strip() + '\n'
return milestone_name, release_notes, changelog
def release_milestone(milestone, release_notes, changelog):
date_released = datetime.now(tz=UTC)
LOG.info(
"Releasing milestone: %s, date %s" % (milestone.name, date_released))
release = milestone.createProductRelease(
date_released=date_released,
changelog=changelog,
release_notes=release_notes,
)
milestone.is_active = False
milestone.lp_save()
return release
def create_milestone(series, name):
"""Create a new milestone in the same series as 'release_milestone'."""
LOG.info("Creating milestone %s in series %s" % (name, series.name))
return series.newMilestone(name=name)
def close_fixed_bugs(milestone):
tasks = list(milestone.searchTasks())
for task in tasks:
LOG.debug("Found %s" % (task.title,))
if task.status == FIX_COMMITTED:
LOG.info("Closing %s" % (task.title,))
task.status = FIX_RELEASED
else:
LOG.warning(
"Bug not fixed, removing from milestone: %s" % (task.title,))
task.milestone = None
task.lp_save()
def upload_tarball(release, tarball_path):
with open(tarball_path) as tarball:
tarball_content = tarball.read()
sig_path = tarball_path + '.asc'
with open(sig_path) as sig:
sig_content = sig.read()
tarball_name = os.path.basename(tarball_path)
LOG.info("Uploading tarball: %s" % (tarball_path,))
release.add_file(
file_type=CODE_RELEASE_TARBALL,
file_content=tarball_content, filename=tarball_name,
signature_content=sig_content,
signature_filename=sig_path,
content_type="application/x-gzip; charset=binary")
def release_project(launchpad, project_name, next_milestone_name):
testtools = launchpad.projects[project_name]
next_milestone = testtools.getMilestone(name=next_milestone_name)
release_name, release_notes, changelog = get_release_notes_and_changelog(
get_path('NEWS'))
LOG.info("Releasing %s %s" % (project_name, release_name))
# Since reversing these operations is hard, and inspecting errors from
# Launchpad is also difficult, do some looking before leaping.
errors = []
tarball_path = get_path('dist/%s-%s.tar.gz' % (project_name, release_name,))
if not os.path.isfile(tarball_path):
errors.append("%s does not exist" % (tarball_path,))
if not os.path.isfile(tarball_path + '.asc'):
errors.append("%s does not exist" % (tarball_path + '.asc',))
if testtools.getMilestone(name=release_name):
errors.append("Milestone %s exists on %s" % (release_name, project_name))
if errors:
for error in errors:
LOG.error(error)
return 1
assign_fix_committed_to_next(testtools, next_milestone)
rename_milestone(next_milestone, release_name)
release = release_milestone(next_milestone, release_notes, changelog)
upload_tarball(release, tarball_path)
create_milestone(next_milestone.series_target, next_milestone_name)
close_fixed_bugs(next_milestone)
return 0
def main(args):
launchpad = Launchpad.login_with(
APP_NAME, SERVICE_ROOT, CACHE_DIR, credentials_file='.lp_creds')
return release_project(launchpad, PROJECT_NAME, NEXT_MILESTONE_NAME)
if __name__ == '__main__':
sys.exit(main(sys.argv))

View File

@@ -1,93 +0,0 @@
#!/usr/bin/python
"""Run the testtools test suite for all supported Pythons.
Prints output as a subunit test suite. If anything goes to stderr, that is
treated as a test error. If a Python is not available, then it is skipped.
"""
from datetime import datetime
import os
import subprocess
import sys
import subunit
from subunit import (
iso8601,
_make_stream_binary,
TestProtocolClient,
TestProtocolServer,
)
from testtools import (
PlaceHolder,
TestCase,
)
from testtools.compat import BytesIO
from testtools.content import text_content
ROOT = os.path.dirname(os.path.dirname(__file__))
def run_for_python(version, result, tests):
if not tests:
tests = ['testtools.tests.test_suite']
# XXX: This could probably be broken up and put into subunit.
python = 'python%s' % (version,)
# XXX: Correct API, but subunit doesn't support it. :(
# result.tags(set(python), set())
result.time(now())
test = PlaceHolder(''.join(c for c in python if c != '.'))
process = subprocess.Popen(
'%s -c pass' % (python,), shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
process.communicate()
if process.returncode:
result.startTest(test)
result.addSkip(test, reason='%s not available' % (python,))
result.stopTest(test)
return
env = os.environ.copy()
if env.get('PYTHONPATH', None):
env['PYTHONPATH'] = os.pathsep.join([ROOT, env['PYTHONPATH']])
else:
env['PYTHONPATH'] = ROOT
result.time(now())
protocol = TestProtocolServer(result)
subunit_path = os.path.join(os.path.dirname(subunit.__file__), 'run.py')
cmd = [
python,
'-W', 'ignore:Module testtools was already imported',
subunit_path]
cmd.extend(tests)
process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)
_make_stream_binary(process.stdout)
_make_stream_binary(process.stderr)
# XXX: This buffers everything. Bad for memory, bad for getting progress
# on jenkins.
output, error = process.communicate()
protocol.readFrom(BytesIO(output))
if error:
result.startTest(test)
result.addError(test, details={
'stderr': text_content(error),
})
result.stopTest(test)
result.time(now())
# XXX: Correct API, but subunit doesn't support it. :(
#result.tags(set(), set(python))
def now():
return datetime.utcnow().replace(tzinfo=iso8601.Utc())
if __name__ == '__main__':
sys.path.append(ROOT)
result = TestProtocolClient(sys.stdout)
for version in '2.6 2.7 3.0 3.1 3.2'.split():
run_for_python(version, result, sys.argv[1:])
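The interpreter-availability probe used above (spawn `pythonX.Y -c pass` and check the return code) can be sketched in isolation. The helper name is hypothetical; the probe technique is the same:

```python
import subprocess
import sys

def python_available(python):
    """Probe an interpreter by running a no-op, mirroring the
    `-c pass` check in run_for_python above."""
    try:
        completed = subprocess.run(
            [python, '-c', 'pass'],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    except OSError:
        # The executable does not exist at all.
        return False
    return completed.returncode == 0

current_ok = python_available(sys.executable)
missing_ok = python_available('no-such-python-interpreter')
```

A missing interpreter can surface either as a nonzero return code (when run through a shell, as in the original) or as an `OSError` (when spawned directly, as here), so both are treated as "not available".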


@@ -1,11 +0,0 @@
#!/usr/bin/python
from StringIO import StringIO
from urllib2 import urlopen
WEB_HOOK = 'http://readthedocs.org/build/588'
if __name__ == '__main__':
urlopen(WEB_HOOK, data=' ')


@@ -1,22 +0,0 @@
[metadata]
name = testtools
summary = Extensions to the Python standard library unit testing framework
home-page = https://github.com/testing-cabal/testtools
description-file = doc/overview.rst
author = Jonathan M. Lange
author-email = jml+testtools@mumak.net
classifier =
License :: OSI Approved :: MIT License
Programming Language :: Python :: 3
[extras]
test =
testscenarios
testresources
unittest2>=1.1.0
[files]
packages = testtools
[bdist_wheel]
universal = 1


@@ -1,16 +0,0 @@
#!/usr/bin/env python
import setuptools
try:
import testtools
cmd_class = {}
if getattr(testtools, 'TestCommand', None) is not None:
cmd_class['test'] = testtools.TestCommand
except Exception:
cmd_class = None
setuptools.setup(
cmdclass=cmd_class,
setup_requires=['pbr'],
pbr=True)


@@ -1,131 +0,0 @@
# Copyright (c) 2008-2012 testtools developers. See LICENSE for details.
"""Extensions to the standard Python unittest library."""
__all__ = [
'clone_test_with_new_id',
'CopyStreamResult',
'ConcurrentTestSuite',
'ConcurrentStreamTestSuite',
'DecorateTestCaseResult',
'ErrorHolder',
'ExpectedException',
'ExtendedToOriginalDecorator',
'ExtendedToStreamDecorator',
'FixtureSuite',
'iterate_tests',
'MultipleExceptions',
'MultiTestResult',
'PlaceHolder',
'run_test_with',
'ResourcedToStreamDecorator',
'Tagger',
'TestCase',
'TestCommand',
'TestByTestResult',
'TestResult',
'TestResultDecorator',
'TextTestResult',
'RunTest',
'skip',
'skipIf',
'skipUnless',
'StreamFailFast',
'StreamResult',
'StreamResultRouter',
'StreamSummary',
'StreamTagger',
'StreamToDict',
'StreamToExtendedDecorator',
'StreamToQueue',
'TestControl',
'ThreadsafeForwardingResult',
'TimestampingStreamResult',
'try_import',
'try_imports',
'unique_text_generator',
]
# Compat - removal announced in 0.9.25.
try:
from extras import (
try_import,
try_imports,
)
except ImportError:
# Support reading __init__ for __version__ without extras, because pip does
# not support setup_requires.
pass
else:
from testtools.matchers._impl import (
Matcher,
)
# Shut up, pyflakes. We are importing for documentation, not for namespacing.
Matcher
from testtools.runtest import (
MultipleExceptions,
RunTest,
)
from testtools.testcase import (
DecorateTestCaseResult,
ErrorHolder,
ExpectedException,
PlaceHolder,
TestCase,
clone_test_with_new_id,
run_test_with,
skip,
skipIf,
skipUnless,
unique_text_generator,
)
from testtools.testresult import (
CopyStreamResult,
ExtendedToOriginalDecorator,
ExtendedToStreamDecorator,
MultiTestResult,
ResourcedToStreamDecorator,
StreamFailFast,
StreamResult,
StreamResultRouter,
StreamSummary,
StreamTagger,
StreamToDict,
StreamToExtendedDecorator,
StreamToQueue,
Tagger,
TestByTestResult,
TestControl,
TestResult,
TestResultDecorator,
TextTestResult,
ThreadsafeForwardingResult,
TimestampingStreamResult,
)
from testtools.testsuite import (
ConcurrentTestSuite,
ConcurrentStreamTestSuite,
FixtureSuite,
iterate_tests,
)
from testtools.distutilscmd import (
TestCommand,
)
# same format as sys.version_info: "A tuple containing the five components of
# the version number: major, minor, micro, releaselevel, and serial. All
# values except releaselevel are integers; the release level is 'alpha',
# 'beta', 'candidate', or 'final'. The version_info value corresponding to the
# Python version 2.0 is (2, 0, 0, 'final', 0)." Additionally we use a
# releaselevel of 'dev' for unreleased under-development code.
#
# If the releaselevel is 'alpha' then the major/minor/micro components are not
# established at this point, and setup.py will use a version of next-$(revno).
# If the releaselevel is 'final', then the tarball will be major.minor.micro.
# Otherwise it is major.minor.micro~$(revno).
from pbr.version import VersionInfo
_version = VersionInfo('testtools')
__version__ = _version.semantic_version().version_tuple()
version = _version.release_string()


@@ -1,17 +0,0 @@
# Copyright (c) 2011 testtools developers. See LICENSE for details.
"""Compatibility helpers that are valid syntax in Python 2.x.
Only add things here if they *only* work in Python 2.x or are Python 2
alternatives to things that *only* work in Python 3.x.
"""
__all__ = [
'reraise',
]
def reraise(exc_class, exc_obj, exc_tb, _marker=object()):
"""Re-raise an exception received from sys.exc_info() or similar."""
raise exc_class, exc_obj, exc_tb


@@ -1,17 +0,0 @@
# Copyright (c) 2011 testtools developers. See LICENSE for details.
"""Compatibility helpers that are valid syntax in Python 3.x.
Only add things here if they *only* work in Python 3.x or are Python 3
alternatives to things that *only* work in Python 2.x.
"""
__all__ = [
'reraise',
]
def reraise(exc_class, exc_obj, exc_tb, _marker=object()):
"""Re-raise an exception received from sys.exc_info() or similar."""
raise exc_obj.with_traceback(exc_tb)
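Usage of the Python 3 `reraise` helper looks like this: capture `sys.exc_info()` inside an except block, then re-raise later with the original traceback preserved. A self-contained sketch:

```python
import sys

def reraise(exc_class, exc_obj, exc_tb):
    # Python 3 form of the helper above: re-raise preserving the
    # original traceback object.
    raise exc_obj.with_traceback(exc_tb)

try:
    try:
        raise ValueError('boom')
    except ValueError:
        # Capture while the exception is live.
        info = sys.exc_info()
    # ...arbitrary cleanup could happen here...
    reraise(*info)
except ValueError as err:
    caught = str(err)
```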


@@ -1,26 +0,0 @@
# Copyright (c) 2008-2017 testtools developers. See LICENSE for details.
"""Assertion helpers."""
from testtools.matchers import (
Annotate,
MismatchError,
)
def assert_that(matchee, matcher, message='', verbose=False):
"""Assert that matchee is matched by matcher.
This should only be used when you need a function-based matcher;
assertThat on testtools.TestCase is preferred and has more
features.
:param matchee: An object to match with matcher.
:param matcher: An object meeting the testtools.Matcher protocol.
:raises MismatchError: When matcher does not match thing.
"""
matcher = Annotate.if_message(message, matcher)
mismatch = matcher.match(matchee)
if not mismatch:
return
raise MismatchError(matchee, matcher, mismatch, verbose)
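The matcher protocol that `assert_that` relies on (a `match()` method returning `None` on success and a mismatch object otherwise) can be shown with a minimal standalone sketch. These class definitions are simplified stand-ins, not the real testtools `Equals`/`MismatchError`:

```python
class Mismatch:
    """Describes why a match failed, per the matcher protocol."""
    def __init__(self, description):
        self._description = description
    def describe(self):
        return self._description

class Equals:
    # match() returns None on success and a Mismatch on failure.
    def __init__(self, expected):
        self.expected = expected
    def match(self, actual):
        if actual == self.expected:
            return None
        return Mismatch('%r != %r' % (actual, self.expected))

def assert_that(matchee, matcher):
    # Standalone analogue of the function above, without Annotate
    # or the richer MismatchError.
    mismatch = matcher.match(matchee)
    if mismatch is not None:
        raise AssertionError(mismatch.describe())

assert_that('ok', Equals('ok'))  # passes silently
try:
    assert_that(1, Equals(2))
    failed = None
except AssertionError as err:
    failed = str(err)
```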


@@ -1,227 +0,0 @@
# Copyright (c) 2008-2015 testtools developers. See LICENSE for details.
"""Compatibility support for python 2 and 3."""
__metaclass__ = type
__all__ = [
'_b',
'_u',
'advance_iterator',
'BytesIO',
'classtypes',
'istext',
'str_is_unicode',
'StringIO',
'reraise',
'unicode_output_stream',
'text_or_bytes',
]
import codecs
import io
import locale
import os
import re
import sys
import traceback
import unicodedata
from extras import try_import, try_imports
BytesIO = try_imports(['StringIO.StringIO', 'io.BytesIO'])
StringIO = try_imports(['StringIO.StringIO', 'io.StringIO'])
# To let setup.py work, make this a conditional import.
linecache = try_import('linecache2')
try:
from testtools import _compat2x as _compat
except SyntaxError:
from testtools import _compat3x as _compat
reraise = _compat.reraise
__u_doc = """A function version of the 'u' prefix.
This is needed because the u prefix is not usable in Python 3 but is required
in Python 2 to get a unicode object.
To migrate code that was written as u'\u1234' in Python 2 to 2+3 change
it to be _u('\u1234'). The Python 3 interpreter will decode it
appropriately and the no-op _u for Python 3 lets it through, in Python
2 we then call unicode-escape in the _u function.
"""
if sys.version_info > (3, 0):
import builtins
def _u(s):
return s
_r = ascii
def _b(s):
"""A byte literal."""
return s.encode("latin-1")
advance_iterator = next
# GZ 2011-08-24: Seems istext() is easy to misuse and makes for bad code.
def istext(x):
return isinstance(x, str)
def classtypes():
return (type,)
str_is_unicode = True
text_or_bytes = (str, bytes)
else:
import __builtin__ as builtins
def _u(s):
# The double replace mangling going on prepares the string for
# unicode-escape - \foo is preserved, \u and \U are decoded.
return (s.replace("\\", "\\\\").replace("\\\\u", "\\u")
.replace("\\\\U", "\\U").decode("unicode-escape"))
_r = repr
def _b(s):
return s
advance_iterator = lambda it: it.next()
def istext(x):
return isinstance(x, basestring)
def classtypes():
import types
return (type, types.ClassType)
str_is_unicode = sys.platform == "cli"
text_or_bytes = (unicode, str)
_u.__doc__ = __u_doc
# GZ 2011-08-24: Using isinstance checks like this encourages bad interfaces,
# there should be better ways to write code needing this.
if not issubclass(getattr(builtins, "bytes", str), str):
def _isbytes(x):
return isinstance(x, bytes)
else:
# Never return True on Pythons that provide the name but not the real type
def _isbytes(x):
return False
def _slow_escape(text):
"""Escape unicode ``text`` leaving printable characters unmodified
The behaviour emulates the Python 3 implementation of repr, see
unicode_repr in unicodeobject.c and isprintable definition.
Because this iterates over the input a codepoint at a time, it's slow, and
does not handle astral characters correctly on Python builds with 16 bit
rather than 32 bit unicode type.
"""
output = []
for c in text:
o = ord(c)
if o < 256:
if o < 32 or 126 < o < 161:
output.append(c.encode("unicode-escape"))
elif o == 92:
# Separate due to bug in unicode-escape codec in Python 2.4
output.append("\\\\")
else:
output.append(c)
else:
# To get correct behaviour would need to pair up surrogates here
if unicodedata.category(c)[0] in "CZ":
output.append(c.encode("unicode-escape"))
else:
output.append(c)
return "".join(output)
def text_repr(text, multiline=None):
"""Rich repr for ``text`` returning unicode, triple quoted if ``multiline``.
"""
is_py3k = sys.version_info > (3, 0)
nl = _isbytes(text) and bytes((0xA,)) or "\n"
if multiline is None:
multiline = nl in text
if not multiline and (is_py3k or not str_is_unicode and type(text) is str):
# Use normal repr for single line of unicode on Python 3 or bytes
return repr(text)
prefix = repr(text[:0])[:-2]
if multiline:
# To escape multiline strings, split and process each line in turn,
# making sure that quotes are not escaped.
if is_py3k:
offset = len(prefix) + 1
lines = []
for l in text.split(nl):
r = repr(l)
q = r[-1]
lines.append(r[offset:-1].replace("\\" + q, q))
elif not str_is_unicode and isinstance(text, str):
lines = [l.encode("string-escape").replace("\\'", "'")
for l in text.split("\n")]
else:
lines = [_slow_escape(l) for l in text.split("\n")]
# Combine the escaped lines and append two of the closing quotes,
# then iterate over the result to escape triple quotes correctly.
_semi_done = "\n".join(lines) + "''"
p = 0
while True:
p = _semi_done.find("'''", p)
if p == -1:
break
_semi_done = "\\".join([_semi_done[:p], _semi_done[p:]])
p += 2
return "".join([prefix, "'''\\\n", _semi_done, "'"])
escaped_text = _slow_escape(text)
# Determine which quote character to use and if one gets prefixed with a
# backslash following the same logic Python uses for repr() on strings
quote = "'"
if "'" in text:
if '"' in text:
escaped_text = escaped_text.replace("'", "\\'")
else:
quote = '"'
return "".join([prefix, quote, escaped_text, quote])
def unicode_output_stream(stream):
"""Get wrapper for given stream that writes any unicode without exception
Characters that can't be coerced to the encoding of the stream, or 'ascii'
if a valid encoding is not found, will be replaced. The original stream may
be returned in situations where a wrapper is determined unneeded.
The wrapper only allows unicode to be written, not non-ascii bytestrings,
which is a good thing to ensure sanity and sanitation.
"""
if (sys.platform == "cli" or
isinstance(stream, (io.TextIOWrapper, io.StringIO))):
# Best to never encode before writing in IronPython, or if it is
# already a TextIO (which in the io library has no encoding
# attribute).
return stream
try:
writer = codecs.getwriter(stream.encoding or "")
except (AttributeError, LookupError):
return codecs.getwriter("ascii")(stream, "replace")
if writer.__module__.rsplit(".", 1)[1].startswith("utf"):
# The current stream has a unicode encoding so no error handler is needed
if sys.version_info > (3, 0):
return stream
return writer(stream)
if sys.version_info > (3, 0):
# Python 3 doesn't seem to make this easy, handle a common case
try:
return stream.__class__(stream.buffer, stream.encoding, "replace",
stream.newlines, stream.line_buffering)
except AttributeError:
pass
return writer(stream, "replace")
def _get_exception_encoding():
"""Return the encoding we expect messages from the OS to be encoded in"""
if os.name == "nt":
# GZ 2010-05-24: Really want the codepage number instead, the error
# handling of standard codecs is more deterministic
return "mbcs"
# GZ 2010-05-23: We need this call to be after initialisation, but there's
# no benefit in asking more than once as it's a global
# setting that can change after the message is formatted.
return locale.getlocale(locale.LC_MESSAGES)[1] or "ascii"


@@ -1,370 +0,0 @@
# Copyright (c) 2009-2012 testtools developers. See LICENSE for details.
"""Content - a MIME-like Content object."""
__all__ = [
'attach_file',
'Content',
'content_from_file',
'content_from_stream',
'json_content',
'text_content',
'TracebackContent',
]
import codecs
import inspect
import json
import os
import sys
from extras import try_import
# To let setup.py work, make this a conditional import.
traceback = try_import('traceback2')
from testtools.compat import (
_b,
_u,
istext,
str_is_unicode,
)
from testtools.content_type import ContentType, JSON, UTF8_TEXT
functools = try_import('functools')
_join_b = _b("").join
DEFAULT_CHUNK_SIZE = 4096
STDOUT_LINE = '\nStdout:\n%s'
STDERR_LINE = '\nStderr:\n%s'
def _iter_chunks(stream, chunk_size, seek_offset=None, seek_whence=0):
"""Read 'stream' in chunks of 'chunk_size'.
:param stream: A file-like object to read from.
:param chunk_size: The size of each read from 'stream'.
:param seek_offset: If non-None, seek before iterating.
:param seek_whence: Pass through to the seek call, if seeking.
"""
if seek_offset is not None:
stream.seek(seek_offset, seek_whence)
chunk = stream.read(chunk_size)
while chunk:
yield chunk
chunk = stream.read(chunk_size)
class Content(object):
"""A MIME-like Content object.
'Content' objects can be serialised to bytes using the iter_bytes method.
If the 'Content-Type' is recognised by other code, they are welcome to
look for richer contents than mere byte serialisation - for example in
memory object graphs etc. However, such code MUST be prepared to receive
a generic 'Content' object that has been reconstructed from a byte stream.
:ivar content_type: The content type of this Content.
"""
def __init__(self, content_type, get_bytes):
"""Create a Content object."""
if None in (content_type, get_bytes):
raise ValueError("None not permitted in %r, %r" % (
content_type, get_bytes))
self.content_type = content_type
self._get_bytes = get_bytes
def __eq__(self, other):
return (self.content_type == other.content_type and
_join_b(self.iter_bytes()) == _join_b(other.iter_bytes()))
def as_text(self):
"""Return all of the content as text.
This is only valid where ``iter_text`` is. It will load all of the
content into memory. Where this is a concern, use ``iter_text``
instead.
"""
return _u('').join(self.iter_text())
def iter_bytes(self):
"""Iterate over bytestrings of the serialised content."""
return self._get_bytes()
def iter_text(self):
"""Iterate over the text of the serialised content.
This is only valid for text MIME types, and will use ISO-8859-1 if
no charset parameter is present in the MIME type. (This is somewhat
arbitrary, but consistent with RFC 2616 section 3.7.1).
:raises ValueError: If the content type is not text/\\*.
"""
if self.content_type.type != "text":
raise ValueError("Not a text type %r" % self.content_type)
return self._iter_text()
def _iter_text(self):
"""Worker for iter_text - does the decoding."""
encoding = self.content_type.parameters.get('charset', 'ISO-8859-1')
decoder = codecs.getincrementaldecoder(encoding)()
for bytes in self.iter_bytes():
yield decoder.decode(bytes)
final = decoder.decode(_b(''), True)
if final:
yield final
def __repr__(self):
return "<Content type=%r, value=%r>" % (
self.content_type, _join_b(self.iter_bytes()))
class StackLinesContent(Content):
"""Content object for stack lines.
This adapts a list of "preprocessed" stack lines into a 'Content' object.
The stack lines are most likely produced from ``traceback.extract_stack``
or ``traceback.extract_tb``.
text/x-traceback;language=python is used for the mime type, in order to
provide room for other languages to format their tracebacks differently.
"""
# Whether or not to hide layers of the stack trace that are
# unittest/testtools internal code. Defaults to True since the
# system-under-test is rarely unittest or testtools.
HIDE_INTERNAL_STACK = True
def __init__(self, stack_lines, prefix_content="", postfix_content=""):
"""Create a StackLinesContent for ``stack_lines``.
:param stack_lines: A list of preprocessed stack lines, probably
obtained by calling ``traceback.extract_stack`` or
``traceback.extract_tb``.
:param prefix_content: If specified, a unicode string to prepend to the
text content.
:param postfix_content: If specified, a unicode string to append to the
text content.
"""
content_type = ContentType('text', 'x-traceback',
{"language": "python", "charset": "utf8"})
value = prefix_content + \
self._stack_lines_to_unicode(stack_lines) + \
postfix_content
super(StackLinesContent, self).__init__(
content_type, lambda: [value.encode("utf8")])
def _stack_lines_to_unicode(self, stack_lines):
"""Converts a list of pre-processed stack lines into a unicode string.
"""
msg_lines = traceback.format_list(stack_lines)
return _u('').join(msg_lines)
class TracebackContent(Content):
"""Content object for tracebacks.
This adapts an exc_info tuple to the 'Content' interface.
'text/x-traceback;language=python' is used for the mime type, in order to
provide room for other languages to format their tracebacks differently.
"""
def __init__(self, err, test, capture_locals=False):
"""Create a TracebackContent for ``err``.
:param err: An exc_info error tuple.
:param test: A test object used to obtain failureException.
:param capture_locals: If true, show locals in the traceback.
"""
if err is None:
raise ValueError("err may not be None")
exctype, value, tb = err
# Skip test runner traceback levels
if StackLinesContent.HIDE_INTERNAL_STACK:
while tb and '__unittest' in tb.tb_frame.f_globals:
tb = tb.tb_next
limit = None
# Disabled due to https://bugs.launchpad.net/testtools/+bug/1188420
if (False
and StackLinesContent.HIDE_INTERNAL_STACK
and test.failureException
and isinstance(value, test.failureException)):
# Skip assert*() traceback levels
limit = 0
while tb and not self._is_relevant_tb_level(tb):
limit += 1
tb = tb.tb_next
stack_lines = list(traceback.TracebackException(exctype, value, tb,
limit=limit, capture_locals=capture_locals).format())
content_type = ContentType('text', 'x-traceback',
{"language": "python", "charset": "utf8"})
super(TracebackContent, self).__init__(
content_type, lambda: [x.encode('utf8') for x in stack_lines])
def StacktraceContent(prefix_content="", postfix_content=""):
"""Content object for stack traces.
This function will create and return a 'Content' object that contains a
stack trace.
The mime type is set to 'text/x-traceback;language=python', so other
languages can format their stack traces differently.
:param prefix_content: A unicode string to add before the stack lines.
:param postfix_content: A unicode string to add after the stack lines.
"""
stack = traceback.walk_stack(None)
def filter_stack(stack):
# Discard the filter_stack frame.
next(stack)
# Discard the StacktraceContent frame.
next(stack)
for f, f_lineno in stack:
if StackLinesContent.HIDE_INTERNAL_STACK:
if '__unittest' in f.f_globals:
return
yield f, f_lineno
extract = traceback.StackSummary.extract(filter_stack(stack))
extract.reverse()
return StackLinesContent(extract, prefix_content, postfix_content)
def json_content(json_data):
"""Create a JSON Content object from JSON-encodeable data."""
data = json.dumps(json_data)
if str_is_unicode:
# The json module perversely returns native str not bytes
data = data.encode('utf8')
return Content(JSON, lambda: [data])
def text_content(text):
"""Create a Content object from some text.
This is useful for adding details which are short strings.
"""
if not istext(text):
raise TypeError(
"text_content must be given text, not '%s'." % type(text).__name__
)
return Content(UTF8_TEXT, lambda: [text.encode('utf8')])
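The key design choice in `json_content` and `text_content` is that each wraps the serialised bytes in a zero-argument callable, so serialisation is lazy and repeatable. A minimal sketch of that shape, with hypothetical helper names (returning the byte-iterable factory directly rather than a full `Content` object):

```python
import json

def text_content_bytes(text):
    # Mirrors text_content's type check and lazy utf8 encoding.
    if not isinstance(text, str):
        raise TypeError("text_content must be given text, not '%s'."
                        % type(text).__name__)
    return lambda: [text.encode('utf8')]

def json_content_bytes(json_data):
    # Mirrors json_content: dump to a str, encode to bytes lazily.
    return lambda: [json.dumps(json_data).encode('utf8')]

text_chunks = text_content_bytes('pass')()
json_chunks = json_content_bytes({'ok': True})()
```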
def maybe_wrap(wrapper, func):
"""Merge metadata for func into wrapper if functools is present."""
if functools is not None:
wrapper = functools.update_wrapper(wrapper, func)
return wrapper
def content_from_file(path, content_type=None, chunk_size=DEFAULT_CHUNK_SIZE,
buffer_now=False, seek_offset=None, seek_whence=0):
"""Create a Content object from a file on disk.
Note that unless ``buffer_now`` is explicitly passed in as True, the file
will only be read from when ``iter_bytes`` is called.
:param path: The path to the file to be used as content.
:param content_type: The type of content. If not specified, defaults
to UTF8-encoded text/plain.
:param chunk_size: The size of chunks to read from the file.
Defaults to ``DEFAULT_CHUNK_SIZE``.
:param buffer_now: If True, read the file from disk now and keep it in
memory. Otherwise, only read when the content is serialized.
:param seek_offset: If non-None, seek within the stream before reading it.
:param seek_whence: If supplied, pass to ``stream.seek()`` when seeking.
"""
if content_type is None:
content_type = UTF8_TEXT
def reader():
with open(path, 'rb') as stream:
for chunk in _iter_chunks(stream,
chunk_size,
seek_offset,
seek_whence):
yield chunk
return content_from_reader(reader, content_type, buffer_now)
def content_from_stream(stream, content_type=None,
chunk_size=DEFAULT_CHUNK_SIZE, buffer_now=False,
seek_offset=None, seek_whence=0):
"""Create a Content object from a file-like stream.
Note that unless ``buffer_now`` is explicitly passed in as True, the stream
will only be read from when ``iter_bytes`` is called.
:param stream: A file-like object to read the content from. The stream
is not closed by this function or the 'Content' object it returns.
:param content_type: The type of content. If not specified, defaults
to UTF8-encoded text/plain.
:param chunk_size: The size of chunks to read from the file.
Defaults to ``DEFAULT_CHUNK_SIZE``.
:param buffer_now: If True, reads from the stream right now. Otherwise,
only reads when the content is serialized. Defaults to False.
:param seek_offset: If non-None, seek within the stream before reading it.
:param seek_whence: If supplied, pass to ``stream.seek()`` when seeking.
"""
if content_type is None:
content_type = UTF8_TEXT
reader = lambda: _iter_chunks(stream, chunk_size, seek_offset, seek_whence)
return content_from_reader(reader, content_type, buffer_now)
def content_from_reader(reader, content_type, buffer_now):
"""Create a Content object that will obtain the content from reader.
:param reader: A callback to read the content. Should return an iterable of
bytestrings.
:param content_type: The content type to create.
:param buffer_now: If True the reader is evaluated immediately and
buffered.
"""
if content_type is None:
content_type = UTF8_TEXT
if buffer_now:
contents = list(reader())
reader = lambda: contents
return Content(content_type, reader)
def attach_file(detailed, path, name=None, content_type=None,
chunk_size=DEFAULT_CHUNK_SIZE, buffer_now=True):
"""Attach a file to this test as a detail.
This is a convenience method wrapping around ``addDetail``.
Note that by default the contents of the file will be read immediately. If
``buffer_now`` is False, then the file *must* exist when the test result is
called with the results of this test, after the test has been torn down.
:param detailed: An object with details
:param path: The path to the file to attach.
:param name: The name to give to the detail for the attached file.
:param content_type: The content type of the file. If not provided,
defaults to UTF8-encoded text/plain.
:param chunk_size: The size of chunks to read from the file. Defaults
to something sensible.
:param buffer_now: If False the file content is read when the content
object is evaluated rather than when attach_file is called.
Note that this may be after any cleanups that ``detailed`` has, so
if the file is a temporary file disabling buffer_now may cause the file
to be read after it is deleted. To handle those cases, using
attach_file as a cleanup is recommended because it guarantees a
sequence for when the attach_file call is made::
detailed.addCleanup(attach_file, detailed, 'foo.txt')
"""
if name is None:
name = os.path.basename(path)
content_object = content_from_file(
path, content_type, chunk_size, buffer_now)
detailed.addDetail(name, content_object)
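The chunked-read generator that underpins `content_from_file` and `content_from_stream` can be demonstrated on its own. This reproduces `_iter_chunks` above against an in-memory stream:

```python
import io

def iter_chunks(stream, chunk_size, seek_offset=None, seek_whence=0):
    # Mirror of _iter_chunks above: optional seek, then fixed-size
    # reads until the stream is exhausted.
    if seek_offset is not None:
        stream.seek(seek_offset, seek_whence)
    chunk = stream.read(chunk_size)
    while chunk:
        yield chunk
        chunk = stream.read(chunk_size)

chunks = list(iter_chunks(io.BytesIO(b'abcdefghij'), 4))
tail = list(iter_chunks(io.BytesIO(b'abcdefghij'), 4, seek_offset=6))
```

Because it is a generator, nothing is read until iteration begins, which is what makes `buffer_now=False` deferral possible.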


@@ -1,41 +0,0 @@
# Copyright (c) 2009-2012 testtools developers. See LICENSE for details.
"""ContentType - a MIME Content Type."""
class ContentType(object):
"""A content type from http://www.iana.org/assignments/media-types/
:ivar type: The primary type, e.g. "text" or "application"
:ivar subtype: The subtype, e.g. "plain" or "octet-stream"
:ivar parameters: A dict of additional parameters specific to the
content type.
"""
def __init__(self, primary_type, sub_type, parameters=None):
"""Create a ContentType."""
if None in (primary_type, sub_type):
raise ValueError("None not permitted in %r, %r" % (
primary_type, sub_type))
self.type = primary_type
self.subtype = sub_type
self.parameters = parameters or {}
def __eq__(self, other):
if type(other) != ContentType:
return False
return self.__dict__ == other.__dict__
def __repr__(self):
if self.parameters:
params = '; '
params += '; '.join(
sorted('%s="%s"' % (k, v) for k, v in self.parameters.items()))
else:
params = ''
return "%s/%s%s" % (self.type, self.subtype, params)
JSON = ContentType('application', 'json')
UTF8_TEXT = ContentType('text', 'plain', {'charset': 'utf8'})
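The `__repr__` above renders a MIME type with its parameters sorted and quoted. A trimmed, self-contained sketch of that class (equality check omitted) showing the output format:

```python
class ContentType:
    # Minimal sketch of the ContentType above, enough to show __repr__.
    def __init__(self, primary_type, sub_type, parameters=None):
        self.type = primary_type
        self.subtype = sub_type
        self.parameters = parameters or {}

    def __repr__(self):
        if self.parameters:
            # Parameters are sorted for a stable, comparable repr.
            params = '; ' + '; '.join(
                sorted('%s="%s"' % (k, v)
                       for k, v in self.parameters.items()))
        else:
            params = ''
        return "%s/%s%s" % (self.type, self.subtype, params)

UTF8_TEXT = ContentType('text', 'plain', {'charset': 'utf8'})
plain_json = ContentType('application', 'json')
```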


@@ -1,27 +0,0 @@
# Copyright (c) 2016 testtools developers. See LICENSE for details.
"""Backwards compatibility for testtools.twistedsupport."""
__all__ = [
'AsynchronousDeferredRunTest',
'AsynchronousDeferredRunTestForBrokenTwisted',
'SynchronousDeferredRunTest',
'assert_fails_with',
]
from .twistedsupport import (
AsynchronousDeferredRunTest,
AsynchronousDeferredRunTestForBrokenTwisted,
SynchronousDeferredRunTest,
assert_fails_with,
)
# Never explicitly exported but had public names:
from .twistedsupport import (
CaptureTwistedLogs,
flush_logged_errors,
)
from .twistedsupport._runtest import (
run_with_log_observers,
UncleanReactorError,
)


@@ -1,62 +0,0 @@
# Copyright (c) 2010-2011 testtools developers. See LICENSE for details.
"""Extensions to the standard Python unittest library."""
import sys
from distutils.core import Command
from distutils.errors import DistutilsOptionError
from testtools.run import TestProgram, TestToolsTestRunner
class TestCommand(Command):
"""Command to run unit tests with testtools"""
description = "run unit tests with testtools"
user_options = [
('catch', 'c', "Catch ctrl-C and display results so far"),
('buffer', 'b', "Buffer stdout and stderr during tests"),
('failfast', 'f', "Stop on first fail or error"),
('test-module=','m', "Run 'test_suite' in specified module"),
('test-suite=','s',
"Test suite to run (e.g. 'some_module.test_suite')")
]
def __init__(self, dist):
Command.__init__(self, dist)
self.runner = TestToolsTestRunner(stdout=sys.stdout)
def initialize_options(self):
self.test_suite = None
self.test_module = None
self.catch = None
self.buffer = None
self.failfast = None
def finalize_options(self):
if self.test_suite is None:
if self.test_module is None:
raise DistutilsOptionError(
"You must specify a module or a suite to run tests from")
else:
self.test_suite = self.test_module+".test_suite"
elif self.test_module:
raise DistutilsOptionError(
"You may specify a module or a suite, but not both")
self.test_args = [self.test_suite]
if self.verbose:
self.test_args.insert(0, '--verbose')
if self.buffer:
self.test_args.insert(0, '--buffer')
if self.catch:
self.test_args.insert(0, '--catch')
if self.failfast:
self.test_args.insert(0, '--failfast')
def run(self):
self.program = TestProgram(
argv=self.test_args, testRunner=self.runner, stdout=sys.stdout,
exit=False)
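The `finalize_options` logic above translates boolean command attributes into the argv handed to the test runner. A hypothetical standalone mirror of that translation (the function name is invented for illustration):

```python
def build_test_args(test_suite, verbose=False, buffer=False,
                    catch=False, failfast=False):
    """Build the runner argv the way finalize_options does: start with
    the suite, then prepend each enabled flag in turn."""
    args = [test_suite]
    if verbose:
        args.insert(0, '--verbose')
    if buffer:
        args.insert(0, '--buffer')
    if catch:
        args.insert(0, '--catch')
    if failfast:
        args.insert(0, '--failfast')
    return args

argv = build_test_args('pkg.test_suite', verbose=True, failfast=True)
```

Note that because each flag is inserted at position 0, flags end up in reverse order of the checks; the suite name is always last.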


@@ -1,48 +0,0 @@
# Copyright (c) 2010-2012 testtools developers. See LICENSE for details.
__all__ = [
'safe_hasattr',
'try_import',
'try_imports',
]
import sys
# Compat - removal announced in 0.9.25.
from extras import (
safe_hasattr,
try_import,
try_imports,
)
def map_values(function, dictionary):
"""Map ``function`` across the values of ``dictionary``.
:return: A dict with the same keys as ``dictionary``, where the value
of each key ``k`` is ``function(dictionary[k])``.
"""
return dict((k, function(dictionary[k])) for k in dictionary)
def filter_values(function, dictionary):
"""Filter ``dictionary`` by its values using ``function``."""
return dict((k, v) for k, v in dictionary.items() if function(v))
def dict_subtract(a, b):
"""Return the part of ``a`` that's not in ``b``."""
return dict((k, a[k]) for k in set(a) - set(b))
def list_subtract(a, b):
"""Return a list ``a`` without the elements of ``b``.
If a particular value is in ``a`` twice and in ``b`` once, then that
value will appear once in the returned list.
"""
a_only = list(a)
for x in b:
if x in a_only:
a_only.remove(x)
return a_only
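The dict and list helpers above are small enough to demonstrate directly. This sketch reimplements `map_values` and `list_subtract` (with modern comprehension syntax for the former) and shows the one-occurrence-per-element behaviour the docstring describes:

```python
def map_values(function, dictionary):
    # Same keys, values passed through function.
    return {k: function(v) for k, v in dictionary.items()}

def list_subtract(a, b):
    # Remove one occurrence of each element of b from a copy of a,
    # so duplicates in a survive partial subtraction.
    a_only = list(a)
    for x in b:
        if x in a_only:
            a_only.remove(x)
    return a_only

doubled = map_values(lambda v: v * 2, {'x': 1, 'y': 2})
remainder = list_subtract([1, 1, 2, 3], [1, 2])
```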


@@ -1,133 +0,0 @@
# Copyright (c) 2008-2012 testtools developers. See LICENSE for details.
"""All the matchers.
Matchers, a way to express complex assertions outside the testcase.
Inspired by 'hamcrest'.
Matcher provides the abstract API that all matchers need to implement.
Bundled matchers are listed in __all__: a list can be obtained by running
$ python -c 'import testtools.matchers; print(testtools.matchers.__all__)'
"""
__all__ = [
'AfterPreprocessing',
'AllMatch',
'Always',
'Annotate',
'AnyMatch',
'Contains',
'ContainsAll',
'ContainedByDict',
'ContainsDict',
'DirContains',
'DirExists',
'DocTestMatches',
'EndsWith',
'Equals',
'FileContains',
'FileExists',
'GreaterThan',
'HasLength',
'HasPermissions',
'Is',
'IsDeprecated',
'IsInstance',
'KeysEqual',
'LessThan',
'MatchesAll',
'MatchesAny',
'MatchesDict',
'MatchesException',
'MatchesListwise',
'MatchesPredicate',
'MatchesPredicateWithParams',
'MatchesRegex',
'MatchesSetwise',
'MatchesStructure',
'Never',
'NotEquals',
'Not',
'PathExists',
'Raises',
'raises',
'SamePath',
'StartsWith',
'TarballContains',
'Warnings',
'WarningMessage',
]
from ._basic import (
Contains,
EndsWith,
Equals,
GreaterThan,
HasLength,
Is,
IsInstance,
LessThan,
MatchesRegex,
NotEquals,
StartsWith,
)
from ._const import (
Always,
Never,
)
from ._datastructures import (
ContainsAll,
MatchesListwise,
MatchesSetwise,
MatchesStructure,
)
from ._dict import (
ContainedByDict,
ContainsDict,
KeysEqual,
MatchesDict,
)
from ._doctest import (
DocTestMatches,
)
from ._exception import (
MatchesException,
Raises,
raises,
)
from ._filesystem import (
DirContains,
DirExists,
FileContains,
FileExists,
HasPermissions,
PathExists,
SamePath,
TarballContains,
)
from ._higherorder import (
AfterPreprocessing,
AllMatch,
Annotate,
AnyMatch,
MatchesAll,
MatchesAny,
MatchesPredicate,
MatchesPredicateWithParams,
Not,
)
from ._warnings import (
IsDeprecated,
WarningMessage,
Warnings,
)
# XXX: These are not explicitly included in __all__. It's unclear how much of
# the public interface they really are.
from ._impl import (
Matcher,
Mismatch,
MismatchError,
)


@@ -1,371 +0,0 @@
# Copyright (c) 2009-2012 testtools developers. See LICENSE for details.
__all__ = [
'Contains',
'EndsWith',
'Equals',
'GreaterThan',
'HasLength',
'Is',
'IsInstance',
'LessThan',
'MatchesRegex',
'NotEquals',
'StartsWith',
]
import operator
from pprint import pformat
import re
import warnings
from ..compat import (
_isbytes,
istext,
str_is_unicode,
text_repr,
)
from ..helpers import list_subtract
from ._higherorder import (
MatchesPredicateWithParams,
PostfixedMismatch,
)
from ._impl import (
Matcher,
Mismatch,
)
def _format(thing):
"""
Blocks of text with newlines are formatted as triple-quote
strings. Everything else is pretty-printed.
"""
if istext(thing) or _isbytes(thing):
return text_repr(thing)
return pformat(thing)
class _BinaryComparison(object):
"""Matcher that compares an object to another object."""
def __init__(self, expected):
self.expected = expected
def __str__(self):
return "%s(%r)" % (self.__class__.__name__, self.expected)
def match(self, other):
if self.comparator(other, self.expected):
return None
return _BinaryMismatch(other, self.mismatch_string, self.expected)
def comparator(self, expected, other):
raise NotImplementedError(self.comparator)
class _BinaryMismatch(Mismatch):
"""Two things did not match."""
def __init__(self, actual, mismatch_string, reference,
reference_on_right=True):
self._actual = actual
self._mismatch_string = mismatch_string
self._reference = reference
self._reference_on_right = reference_on_right
@property
def expected(self):
warnings.warn(
'%s.expected deprecated after 1.8.1' % (self.__class__.__name__,),
DeprecationWarning,
stacklevel=2,
)
return self._reference
@property
def other(self):
warnings.warn(
'%s.other deprecated after 1.8.1' % (self.__class__.__name__,),
DeprecationWarning,
stacklevel=2,
)
return self._actual
def describe(self):
actual = repr(self._actual)
reference = repr(self._reference)
if len(actual) + len(reference) > 70:
return "%s:\nreference = %s\nactual = %s\n" % (
self._mismatch_string, _format(self._reference),
_format(self._actual))
else:
if self._reference_on_right:
left, right = actual, reference
else:
left, right = reference, actual
return "%s %s %s" % (left, self._mismatch_string, right)
class Equals(_BinaryComparison):
"""Matches if the items are equal."""
comparator = operator.eq
mismatch_string = '!='
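The classes above all share one small protocol: ``match`` returns ``None`` on success, or a mismatch object whose ``describe`` explains the failure. A minimal self-contained sketch of that contract (illustrative names, not the real testtools classes):

```python
# Minimal sketch of the matcher protocol: match() returns None on
# success, or an object whose describe() explains the failure.
class SketchMismatch:
    def __init__(self, description):
        self._description = description

    def describe(self):
        return self._description

class SketchEquals:
    def __init__(self, expected):
        self.expected = expected

    def match(self, other):
        if other == self.expected:
            return None
        return SketchMismatch('%r != %r' % (other, self.expected))

assert SketchEquals(1).match(1) is None
assert SketchEquals(1).match(2).describe() == '2 != 1'
```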
class _FlippedEquals(object):
"""Matches if the items are equal.
Exactly like ``Equals`` except that the short mismatch message is "
$reference != $actual" rather than "$actual != $reference". This allows
for ``TestCase.assertEqual`` to use a matcher but still have the order of
items in the error message align with the order of items in the call to
the assertion.
"""
def __init__(self, expected):
self._expected = expected
def match(self, other):
mismatch = Equals(self._expected).match(other)
if not mismatch:
return None
return _BinaryMismatch(other, '!=', self._expected, False)
class NotEquals(_BinaryComparison):
"""Matches if the items are not equal.
In most cases, this is equivalent to ``Not(Equals(foo))``. The difference
only matters when testing ``__ne__`` implementations.
"""
comparator = operator.ne
mismatch_string = '=='
class Is(_BinaryComparison):
"""Matches if the items are identical."""
comparator = operator.is_
mismatch_string = 'is not'
class LessThan(_BinaryComparison):
"""Matches if the item is less than the matchers reference object."""
comparator = operator.__lt__
mismatch_string = '>='
class GreaterThan(_BinaryComparison):
"""Matches if the item is greater than the matchers reference object."""
comparator = operator.__gt__
mismatch_string = '<='
class SameMembers(Matcher):
"""Matches if two iterators have the same members.
This is not the same as set equivalence. The two iterators must be of the
same length and have the same repetitions.
"""
def __init__(self, expected):
super(SameMembers, self).__init__()
self.expected = expected
def __str__(self):
return '%s(%r)' % (self.__class__.__name__, self.expected)
def match(self, observed):
expected_only = list_subtract(self.expected, observed)
observed_only = list_subtract(observed, self.expected)
if expected_only == observed_only == []:
return
return PostfixedMismatch(
"\nmissing: %s\nextra: %s" % (
_format(expected_only), _format(observed_only)),
_BinaryMismatch(observed, 'elements differ', self.expected))
class DoesNotStartWith(Mismatch):
def __init__(self, matchee, expected):
"""Create a DoesNotStartWith Mismatch.
:param matchee: the string that did not match.
:param expected: the string that 'matchee' was expected to start with.
"""
self.matchee = matchee
self.expected = expected
def describe(self):
return "%s does not start with %s." % (
text_repr(self.matchee), text_repr(self.expected))
class StartsWith(Matcher):
"""Checks whether one string starts with another."""
def __init__(self, expected):
"""Create a StartsWith Matcher.
:param expected: the string that matchees should start with.
"""
self.expected = expected
def __str__(self):
return "StartsWith(%r)" % (self.expected,)
def match(self, matchee):
if not matchee.startswith(self.expected):
return DoesNotStartWith(matchee, self.expected)
return None
class DoesNotEndWith(Mismatch):
def __init__(self, matchee, expected):
"""Create a DoesNotEndWith Mismatch.
:param matchee: the string that did not match.
:param expected: the string that 'matchee' was expected to end with.
"""
self.matchee = matchee
self.expected = expected
def describe(self):
return "%s does not end with %s." % (
text_repr(self.matchee), text_repr(self.expected))
class EndsWith(Matcher):
"""Checks whether one string ends with another."""
def __init__(self, expected):
"""Create a EndsWith Matcher.
:param expected: the string that matchees should end with.
"""
self.expected = expected
def __str__(self):
return "EndsWith(%r)" % (self.expected,)
def match(self, matchee):
if not matchee.endswith(self.expected):
return DoesNotEndWith(matchee, self.expected)
return None
class IsInstance(object):
"""Matcher that wraps isinstance."""
def __init__(self, *types):
self.types = tuple(types)
def __str__(self):
return "%s(%s)" % (self.__class__.__name__,
', '.join(type.__name__ for type in self.types))
def match(self, other):
if isinstance(other, self.types):
return None
return NotAnInstance(other, self.types)
class NotAnInstance(Mismatch):
def __init__(self, matchee, types):
"""Create a NotAnInstance Mismatch.
:param matchee: the thing which is not an instance of any of types.
:param types: A tuple of the types which were expected.
"""
self.matchee = matchee
self.types = types
def describe(self):
if len(self.types) == 1:
typestr = self.types[0].__name__
else:
typestr = 'any of (%s)' % ', '.join(type.__name__ for type in
self.types)
return "'%s' is not an instance of %s" % (self.matchee, typestr)
class DoesNotContain(Mismatch):
def __init__(self, matchee, needle):
"""Create a DoesNotContain Mismatch.
:param matchee: the object that did not contain needle.
:param needle: the needle that 'matchee' was expected to contain.
"""
self.matchee = matchee
self.needle = needle
def describe(self):
return "%r not in %r" % (self.needle, self.matchee)
class Contains(Matcher):
"""Checks whether something is contained in another thing."""
def __init__(self, needle):
"""Create a Contains Matcher.
:param needle: the thing that needs to be contained by matchees.
"""
self.needle = needle
def __str__(self):
return "Contains(%r)" % (self.needle,)
def match(self, matchee):
try:
if self.needle not in matchee:
return DoesNotContain(matchee, self.needle)
except TypeError:
# e.g. 1 in 2 will raise TypeError
return DoesNotContain(matchee, self.needle)
return None
class MatchesRegex(object):
"""Matches if the matchee is matched by a regular expression."""
def __init__(self, pattern, flags=0):
self.pattern = pattern
self.flags = flags
def __str__(self):
args = ['%r' % self.pattern]
flag_arg = []
# dir() sorts the attributes for us, so we don't need to do it again.
for flag in dir(re):
if len(flag) == 1:
if self.flags & getattr(re, flag):
flag_arg.append('re.%s' % flag)
if flag_arg:
args.append('|'.join(flag_arg))
return '%s(%s)' % (self.__class__.__name__, ', '.join(args))
def match(self, value):
if not re.match(self.pattern, value, self.flags):
pattern = self.pattern
if not isinstance(pattern, str_is_unicode and str or unicode):
pattern = pattern.decode("latin1")
pattern = pattern.encode("unicode_escape").decode("ascii")
return Mismatch("%r does not match /%s/" % (
value, pattern.replace("\\\\", "\\")))
def has_len(x, y):
return len(x) == y
HasLength = MatchesPredicateWithParams(has_len, "len({0}) != {1}", "HasLength")
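``HasLength`` is produced by ``MatchesPredicateWithParams``; a hypothetical minimal version of that factory pattern (the names here are illustrative sketches, not the testtools API) looks like:

```python
# Hypothetical minimal predicate-matcher factory, in the spirit of
# MatchesPredicateWithParams: bind a predicate and a message template.
def matches_predicate_with_params(predicate, message):
    class _PredicateMatcher:
        def __init__(self, *params):
            self.params = params

        def match(self, value):
            # None signals a match; a string stands in for a Mismatch.
            if predicate(value, *self.params):
                return None
            return message.format(value, *self.params)
    return _PredicateMatcher

has_length = matches_predicate_with_params(
    lambda x, y: len(x) == y, 'len({0!r}) != {1}')
assert has_length(3).match([1, 2, 3]) is None
assert has_length(2).match([1]) == 'len([1]) != 2'
```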


@@ -1,58 +0,0 @@
# Copyright (c) 2016 testtools developers. See LICENSE for details.
__all__ = [
'Always',
'Never',
]
from testtools.compat import _u
from ._impl import Mismatch
class _Always(object):
"""Always matches."""
def __str__(self):
return 'Always()'
def match(self, value):
return None
def Always():
"""Always match.
That is::
self.assertThat(x, Always())
Will always match and never fail, no matter what ``x`` is. Most useful when
passed to other higher-order matchers (e.g.
:py:class:`~testtools.matchers.MatchesListwise`).
"""
return _Always()
class _Never(object):
"""Never matches."""
def __str__(self):
return 'Never()'
def match(self, value):
return Mismatch(
_u('Inevitable mismatch on %r' % (value,)))
def Never():
"""Never match.
That is::
self.assertThat(x, Never())
Will never match and always fail, no matter what ``x`` is. Included for
completeness with :py:func:`.Always`, but if you find a use for this, let
us know!
"""
return _Never()


@@ -1,228 +0,0 @@
# Copyright (c) 2009-2015 testtools developers. See LICENSE for details.
"""Matchers that operate with knowledge of Python data structures."""
__all__ = [
'ContainsAll',
'MatchesListwise',
'MatchesSetwise',
'MatchesStructure',
]
from ..helpers import map_values
from ._higherorder import (
Annotate,
MatchesAll,
MismatchesAll,
)
from ._impl import Mismatch
def ContainsAll(items):
"""Make a matcher that checks whether a list of things is contained
in another thing.
The matcher effectively checks that the provided sequence is a subset of
the matchee.
"""
from ._basic import Contains
return MatchesAll(*map(Contains, items), first_only=False)
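The subset check that ``ContainsAll`` performs can be sketched in plain Python:

```python
# Plain-Python sketch of the ContainsAll subset check: every given
# item must be contained in the matchee.
def contains_all(items, matchee):
    return all(item in matchee for item in items)

assert contains_all([1, 2], [3, 2, 1])
assert not contains_all([1, 4], [3, 2, 1])
```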
class MatchesListwise(object):
"""Matches if each matcher matches the corresponding value.
More easily explained by example than in words:
>>> from ._basic import Equals
>>> MatchesListwise([Equals(1)]).match([1])
>>> MatchesListwise([Equals(1), Equals(2)]).match([1, 2])
>>> print (MatchesListwise([Equals(1), Equals(2)]).match([2, 1]).describe())
Differences: [
2 != 1
1 != 2
]
>>> matcher = MatchesListwise([Equals(1), Equals(2)], first_only=True)
>>> print (matcher.match([3, 4]).describe())
3 != 1
"""
def __init__(self, matchers, first_only=False):
"""Construct a MatchesListwise matcher.
:param matchers: A list of matchers that the matched values must match.
:param first_only: If True, then only report the first mismatch,
otherwise report all of them. Defaults to False.
"""
self.matchers = matchers
self.first_only = first_only
def match(self, values):
from ._basic import HasLength
mismatches = []
length_mismatch = Annotate(
"Length mismatch", HasLength(len(self.matchers))).match(values)
if length_mismatch:
mismatches.append(length_mismatch)
for matcher, value in zip(self.matchers, values):
mismatch = matcher.match(value)
if mismatch:
if self.first_only:
return mismatch
mismatches.append(mismatch)
if mismatches:
return MismatchesAll(mismatches)
class MatchesStructure(object):
"""Matcher that matches an object structurally.
'Structurally' here means that attributes of the object being matched are
compared against given matchers.
`fromExample` allows the creation of a matcher from a prototype object and
then modified versions can be created with `update`.
`byEquality` creates a matcher in much the same way as the constructor,
except that the matcher for each of the attributes is assumed to be
`Equals`.
`byMatcher` creates a similar matcher to `byEquality`, but you get to pick
the matcher, rather than just using `Equals`.
"""
def __init__(self, **kwargs):
"""Construct a `MatchesStructure`.
:param kwargs: A mapping of attributes to matchers.
"""
self.kws = kwargs
@classmethod
def byEquality(cls, **kwargs):
"""Matches an object where the attributes equal the keyword values.
Similar to the constructor, except that the matcher is assumed to be
Equals.
"""
from ._basic import Equals
return cls.byMatcher(Equals, **kwargs)
@classmethod
def byMatcher(cls, matcher, **kwargs):
"""Matches an object where the attributes match the keyword values.
Similar to the constructor, except that the provided matcher is used
to match all of the values.
"""
return cls(**map_values(matcher, kwargs))
@classmethod
def fromExample(cls, example, *attributes):
from ._basic import Equals
kwargs = {}
for attr in attributes:
kwargs[attr] = Equals(getattr(example, attr))
return cls(**kwargs)
def update(self, **kws):
new_kws = self.kws.copy()
for attr, matcher in kws.items():
if matcher is None:
new_kws.pop(attr, None)
else:
new_kws[attr] = matcher
return type(self)(**new_kws)
def __str__(self):
kws = []
for attr, matcher in sorted(self.kws.items()):
kws.append("%s=%s" % (attr, matcher))
return "%s(%s)" % (self.__class__.__name__, ', '.join(kws))
def match(self, value):
matchers = []
values = []
for attr, matcher in sorted(self.kws.items()):
matchers.append(Annotate(attr, matcher))
values.append(getattr(value, attr))
return MatchesListwise(matchers).match(values)
class MatchesSetwise(object):
"""Matches if all the matchers match elements of the value being matched.
That is, each element in the 'observed' set must match exactly one matcher
from the set of matchers, with no matchers left over.
The difference compared to `MatchesListwise` is that the order of the
matchings does not matter.
"""
def __init__(self, *matchers):
self.matchers = matchers
def match(self, observed):
remaining_matchers = set(self.matchers)
not_matched = []
for value in observed:
for matcher in remaining_matchers:
if matcher.match(value) is None:
remaining_matchers.remove(matcher)
break
else:
not_matched.append(value)
if not_matched or remaining_matchers:
remaining_matchers = list(remaining_matchers)
# There are various cases that all should be reported somewhat
# differently.
# There are two trivial cases:
# 1) There are just some matchers left over.
# 2) There are just some values left over.
# Then there are three more interesting cases:
# 3) There are the same number of matchers and values left over.
# 4) There are more matchers left over than values.
# 5) There are more values left over than matchers.
if len(not_matched) == 0:
if len(remaining_matchers) > 1:
msg = "There were %s matchers left over: " % (
len(remaining_matchers),)
else:
msg = "There was 1 matcher left over: "
msg += ', '.join(map(str, remaining_matchers))
return Mismatch(msg)
elif len(remaining_matchers) == 0:
if len(not_matched) > 1:
return Mismatch(
"There were %s values left over: %s" % (
len(not_matched), not_matched))
else:
return Mismatch(
"There was 1 value left over: %s" % (
not_matched, ))
else:
common_length = min(len(remaining_matchers), len(not_matched))
if common_length == 0:
raise AssertionError("common_length can't be 0 here")
if common_length > 1:
msg = "There were %s mismatches" % (common_length,)
else:
msg = "There was 1 mismatch"
if len(remaining_matchers) > len(not_matched):
extra_matchers = remaining_matchers[common_length:]
msg += " and %s extra matcher" % (len(extra_matchers), )
if len(extra_matchers) > 1:
msg += "s"
msg += ': ' + ', '.join(map(str, extra_matchers))
elif len(not_matched) > len(remaining_matchers):
extra_values = not_matched[common_length:]
msg += " and %s extra value" % (len(extra_values), )
if len(extra_values) > 1:
msg += "s"
msg += ': ' + str(extra_values)
return Annotate(
msg, MatchesListwise(remaining_matchers[:common_length])
).match(not_matched[:common_length])
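The pairing loop in ``MatchesSetwise.match`` is greedy: each observed value consumes the first remaining matcher that accepts it, and leftovers on either side mean a mismatch. A standalone sketch of that algorithm, with plain predicates standing in for matchers:

```python
# Greedy pairing as in MatchesSetwise.match: each value consumes the
# first remaining predicate that accepts it; leftover predicates or
# leftover values both indicate a mismatch.
def setwise_match(predicates, observed):
    remaining = list(predicates)
    not_matched = []
    for value in observed:
        for predicate in remaining:
            if predicate(value):
                remaining.remove(predicate)
                break
        else:
            not_matched.append(value)
    return remaining, not_matched

remaining, extra = setwise_match(
    [lambda v: v == 1, lambda v: v == 2], [2, 1])
assert remaining == [] and extra == []
remaining, extra = setwise_match([lambda v: v == 1], [3])
assert len(remaining) == 1 and extra == [3]
```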


@@ -1,261 +0,0 @@
# Copyright (c) 2009-2012 testtools developers. See LICENSE for details.
__all__ = [
'KeysEqual',
]
from ..helpers import (
dict_subtract,
filter_values,
map_values,
)
from ._higherorder import (
AnnotatedMismatch,
PrefixedMismatch,
MismatchesAll,
)
from ._impl import Matcher, Mismatch
def LabelledMismatches(mismatches, details=None):
"""A collection of mismatches, each labelled."""
return MismatchesAll(
(PrefixedMismatch(k, v) for (k, v) in sorted(mismatches.items())),
wrap=False)
class MatchesAllDict(Matcher):
"""Matches if all of the matchers it is created with match.
A lot like ``MatchesAll``, but takes a dict of Matchers and labels any
mismatches with the key of the dictionary.
"""
def __init__(self, matchers):
super(MatchesAllDict, self).__init__()
self.matchers = matchers
def __str__(self):
return 'MatchesAllDict(%s)' % (_format_matcher_dict(self.matchers),)
def match(self, observed):
mismatches = {}
for label in self.matchers:
mismatches[label] = self.matchers[label].match(observed)
return _dict_to_mismatch(
mismatches, result_mismatch=LabelledMismatches)
class DictMismatches(Mismatch):
"""A mismatch with a dict of child mismatches."""
def __init__(self, mismatches, details=None):
super(DictMismatches, self).__init__(None, details=details)
self.mismatches = mismatches
def describe(self):
lines = ['{']
lines.extend(
[' %r: %s,' % (key, mismatch.describe())
for (key, mismatch) in sorted(self.mismatches.items())])
lines.append('}')
return '\n'.join(lines)
def _dict_to_mismatch(data, to_mismatch=None,
result_mismatch=DictMismatches):
if to_mismatch:
data = map_values(to_mismatch, data)
mismatches = filter_values(bool, data)
if mismatches:
return result_mismatch(mismatches)
class _MatchCommonKeys(Matcher):
"""Match on keys in a dictionary.
Given a dictionary where the values are matchers, this will look for
common keys in the matched dictionary and match if and only if all common
keys match the given matchers.
Thus::
>>> structure = {'a': Equals('x'), 'b': Equals('y')}
>>> _MatchCommonKeys(structure).match({'a': 'x', 'c': 'z'})
None
"""
def __init__(self, dict_of_matchers):
super(_MatchCommonKeys, self).__init__()
self._matchers = dict_of_matchers
def _compare_dicts(self, expected, observed):
common_keys = set(expected.keys()) & set(observed.keys())
mismatches = {}
for key in common_keys:
mismatch = expected[key].match(observed[key])
if mismatch:
mismatches[key] = mismatch
return mismatches
def match(self, observed):
mismatches = self._compare_dicts(self._matchers, observed)
if mismatches:
return DictMismatches(mismatches)
class _SubDictOf(Matcher):
"""Matches if the matched dict only has keys that are in given dict."""
def __init__(self, super_dict, format_value=repr):
super(_SubDictOf, self).__init__()
self.super_dict = super_dict
self.format_value = format_value
def match(self, observed):
excess = dict_subtract(observed, self.super_dict)
return _dict_to_mismatch(
excess, lambda v: Mismatch(self.format_value(v)))
class _SuperDictOf(Matcher):
"""Matches if all of the keys in the given dict are in the matched dict.
"""
def __init__(self, sub_dict, format_value=repr):
super(_SuperDictOf, self).__init__()
self.sub_dict = sub_dict
self.format_value = format_value
def match(self, super_dict):
return _SubDictOf(super_dict, self.format_value).match(self.sub_dict)
def _format_matcher_dict(matchers):
return '{%s}' % (
', '.join(sorted('%r: %s' % (k, v) for k, v in matchers.items())))
class _CombinedMatcher(Matcher):
"""Many matchers labelled and combined into one uber-matcher.
Subclass this and then specify a dict of matcher factories that take a
single 'expected' value and return a matcher. The subclass will match
only if all of the matchers made from factories match.
Not **entirely** dissimilar from ``MatchesAll``.
"""
matcher_factories = {}
def __init__(self, expected):
super(_CombinedMatcher, self).__init__()
self._expected = expected
def format_expected(self, expected):
return repr(expected)
def __str__(self):
return '%s(%s)' % (
self.__class__.__name__, self.format_expected(self._expected))
def match(self, observed):
matchers = dict(
(k, v(self._expected)) for k, v in self.matcher_factories.items())
return MatchesAllDict(matchers).match(observed)
class MatchesDict(_CombinedMatcher):
"""Match a dictionary exactly, by its keys.
Specify a dictionary mapping keys (often strings) to matchers. This is
the 'expected' dict. Any dictionary that matches this must have exactly
the same keys, and the values must match the corresponding matchers in the
expected dict.
"""
matcher_factories = {
'Extra': _SubDictOf,
'Missing': lambda m: _SuperDictOf(m, format_value=str),
'Differences': _MatchCommonKeys,
}
format_expected = lambda self, expected: _format_matcher_dict(expected)
class ContainsDict(_CombinedMatcher):
"""Match a dictionary for that contains a specified sub-dictionary.
Specify a dictionary mapping keys (often strings) to matchers. This is
the 'expected' dict. Any dictionary that matches this must have **at
least** these keys, and the values must match the corresponding matchers
in the expected dict. Dictionaries that have more keys will also match.
In other words, any matching dictionary must contain the dictionary given
to the constructor.
Does not check for strict sub-dictionary. That is, equal dictionaries
match.
"""
matcher_factories = {
'Missing': lambda m: _SuperDictOf(m, format_value=str),
'Differences': _MatchCommonKeys,
}
format_expected = lambda self, expected: _format_matcher_dict(expected)
class ContainedByDict(_CombinedMatcher):
"""Match a dictionary for which this is a super-dictionary.
Specify a dictionary mapping keys (often strings) to matchers. This is
the 'expected' dict. Any dictionary that matches this must have **only**
these keys, and the values must match the corresponding matchers in the
expected dict. Dictionaries that have fewer keys can also match.
In other words, any matching dictionary must be contained by the
dictionary given to the constructor.
Does not check for strict super-dictionary. That is, equal dictionaries
match.
"""
matcher_factories = {
'Extra': _SubDictOf,
'Differences': _MatchCommonKeys,
}
format_expected = lambda self, expected: _format_matcher_dict(expected)
class KeysEqual(Matcher):
"""Checks whether a dict has particular keys."""
def __init__(self, *expected):
"""Create a `KeysEqual` Matcher.
:param expected: The keys the matchee is expected to have. As a
special case, if a single argument is specified, and it is a
mapping, then we use its keys as the expected set.
"""
super(KeysEqual, self).__init__()
if len(expected) == 1:
try:
expected = expected[0].keys()
except AttributeError:
pass
self.expected = list(expected)
def __str__(self):
return "KeysEqual(%s)" % ', '.join(map(repr, self.expected))
def match(self, matchee):
from ._basic import _BinaryMismatch, Equals
expected = sorted(self.expected)
matched = Equals(expected).match(sorted(matchee.keys()))
if matched:
return AnnotatedMismatch(
'Keys not equal',
_BinaryMismatch(expected, 'does not match', matchee))
return None
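``KeysEqual`` ultimately reduces to comparing sorted key lists; the core check can be sketched as:

```python
# Sketch of the KeysEqual core check: the sorted expected keys must
# equal the sorted keys of the matchee.
def keys_equal(expected_keys, matchee):
    return sorted(expected_keys) == sorted(matchee.keys())

assert keys_equal(['a', 'b'], {'b': 1, 'a': 2})
assert not keys_equal(['a'], {'a': 1, 'b': 2})
```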


@@ -1,104 +0,0 @@
# Copyright (c) 2009-2012 testtools developers. See LICENSE for details.
__all__ = [
'DocTestMatches',
]
import doctest
import re
from ..compat import str_is_unicode
from ._impl import Mismatch
class _NonManglingOutputChecker(doctest.OutputChecker):
"""Doctest checker that works with unicode rather than mangling strings
This is needed because current Python versions have tried to fix string
encoding related problems, but regressed the default behaviour with
unicode inputs in the process.
In Python 2.6 and 2.7 ``OutputChecker.output_difference`` was changed
to return a bytestring encoded as per ``sys.stdout.encoding``, or utf-8 if
that can't be determined. Worse, that encoding process happens in the
innocent looking `_indent` global function. Because the
`DocTestMismatch.describe` result may well not be destined for printing to
stdout, this is no good for us. To get a unicode return as before, the
method is monkey patched if ``doctest._encoding`` exists.
Python 3 has a different problem. For some reason both inputs are encoded
to ascii with 'backslashreplace', making an escaped string match its
unescaped form. Overriding the offending ``OutputChecker._toAscii`` method
is sufficient to revert this.
"""
def _toAscii(self, s):
"""Return ``s`` unchanged rather than mangling it to ascii"""
return s
# Only do this overriding hackery if doctest has a broken _input function
if getattr(doctest, "_encoding", None) is not None:
from types import FunctionType as __F
__f = doctest.OutputChecker.output_difference.im_func
__g = dict(__f.func_globals)
def _indent(s, indent=4, _pattern=re.compile("^(?!$)", re.MULTILINE)):
"""Prepend non-empty lines in ``s`` with ``indent`` number of spaces"""
return _pattern.sub(indent*" ", s)
__g["_indent"] = _indent
output_difference = __F(__f.func_code, __g, "output_difference")
del __F, __f, __g, _indent
class DocTestMatches(object):
"""See if a string matches a doctest example."""
def __init__(self, example, flags=0):
"""Create a DocTestMatches to match example.
:param example: The example to match e.g. 'foo bar baz'
:param flags: doctest comparison flags to match on. e.g.
doctest.ELLIPSIS.
"""
if not example.endswith('\n'):
example += '\n'
self.want = example # required variable name by doctest.
self.flags = flags
self._checker = _NonManglingOutputChecker()
def __str__(self):
if self.flags:
flagstr = ", flags=%d" % self.flags
else:
flagstr = ""
return 'DocTestMatches(%r%s)' % (self.want, flagstr)
def _with_nl(self, actual):
result = self.want.__class__(actual)
if not result.endswith('\n'):
result += '\n'
return result
def match(self, actual):
with_nl = self._with_nl(actual)
if self._checker.check_output(self.want, with_nl, self.flags):
return None
return DocTestMismatch(self, with_nl)
def _describe_difference(self, with_nl):
return self._checker.output_difference(self, with_nl, self.flags)
class DocTestMismatch(Mismatch):
"""Mismatch object for DocTestMatches."""
def __init__(self, matcher, with_nl):
self.matcher = matcher
self.with_nl = with_nl
def describe(self):
s = self.matcher._describe_difference(self.with_nl)
if str_is_unicode or isinstance(s, unicode):
return s
# GZ 2011-08-24: This is actually pretty bogus, most C0 codes should
# be escaped, in addition to non-ascii bytes.
return s.decode("latin1").encode("ascii", "backslashreplace")
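``DocTestMatches`` delegates the actual comparison to the standard library's ``doctest.OutputChecker``; a direct demonstration of that underlying call:

```python
import doctest

# DocTestMatches hands comparison to doctest.OutputChecker:
# check_output(want, got, optionflags) returns True on a match,
# honouring flags such as doctest.ELLIPSIS.
checker = doctest.OutputChecker()
assert checker.check_output('foo ... baz\n', 'foo bar baz\n',
                            doctest.ELLIPSIS)
assert not checker.check_output('foo\n', 'bar\n', 0)
```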


@@ -1,136 +0,0 @@
# Copyright (c) 2009-2012 testtools developers. See LICENSE for details.
__all__ = [
'MatchesException',
'Raises',
'raises',
]
import sys
from testtools.compat import (
classtypes,
istext,
)
from ._basic import MatchesRegex
from ._higherorder import AfterPreprocessing
from ._impl import (
Matcher,
Mismatch,
)
_error_repr = BaseException.__repr__
def _is_exception(exc):
return isinstance(exc, BaseException)
def _is_user_exception(exc):
return isinstance(exc, Exception)
class MatchesException(Matcher):
"""Match an exc_info tuple against an exception instance or type."""
def __init__(self, exception, value_re=None):
"""Create a MatchesException that will match exc_info's for exception.
:param exception: Either an exception instance or type.
If an instance is given, the type and arguments of the exception
are checked. If a type is given only the type of the exception is
checked. If a tuple is given, then as with isinstance, any of the
types in the tuple matching is sufficient to match.
:param value_re: If 'exception' is a type, and the matchee exception
is of the right type, then match against this. If value_re is a
string, then assume value_re is a regular expression and match
the str() of the exception against it. Otherwise, assume value_re
is a matcher, and match the exception against it.
"""
Matcher.__init__(self)
self.expected = exception
if istext(value_re):
value_re = AfterPreprocessing(str, MatchesRegex(value_re), False)
self.value_re = value_re
expected_type = type(self.expected)
self._is_instance = not any(issubclass(expected_type, class_type)
for class_type in classtypes() + (tuple,))
def match(self, other):
if type(other) != tuple:
return Mismatch('%r is not an exc_info tuple' % other)
expected_class = self.expected
if self._is_instance:
expected_class = expected_class.__class__
if not issubclass(other[0], expected_class):
return Mismatch('%r is not a %r' % (other[0], expected_class))
if self._is_instance:
if other[1].args != self.expected.args:
return Mismatch('%s has different arguments to %s.' % (
_error_repr(other[1]), _error_repr(self.expected)))
elif self.value_re is not None:
return self.value_re.match(other[1])
def __str__(self):
if self._is_instance:
return "MatchesException(%s)" % _error_repr(self.expected)
return "MatchesException(%s)" % repr(self.expected)
class Raises(Matcher):
"""Match if the matchee raises an exception when called.
Exceptions which are not subclasses of Exception propagate out of the
Raises.match call unless they are explicitly matched.
"""
def __init__(self, exception_matcher=None):
"""Create a Raises matcher.
:param exception_matcher: Optional validator for the exception raised
by matchee. If supplied the exc_info tuple for the exception raised
is passed into that matcher. If no exception_matcher is supplied
then the simple fact of raising an exception is considered enough
to match on.
"""
self.exception_matcher = exception_matcher
def match(self, matchee):
try:
result = matchee()
return Mismatch('%r returned %r' % (matchee, result))
# Catch all exceptions: Raises() should be able to match a
# KeyboardInterrupt or SystemExit.
except:
exc_info = sys.exc_info()
if self.exception_matcher:
mismatch = self.exception_matcher.match(exc_info)
if not mismatch:
del exc_info
return
else:
mismatch = None
# The exception did not match, or no explicit matching logic was
# performed. If the exception is a non-user exception then
# propagate it.
exception = exc_info[1]
if _is_exception(exception) and not _is_user_exception(exception):
del exc_info
raise
return mismatch
def __str__(self):
return 'Raises()'
def raises(exception):
"""Make a matcher that checks that a callable raises an exception.
This is a convenience function, exactly equivalent to::
return Raises(MatchesException(exception))
See `Raises` and `MatchesException` for more information.
"""
return Raises(MatchesException(exception))
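The ``Raises``/``raises`` behaviour can be sketched standalone: call the matchee, succeed only on the expected exception, and report a plain return as a mismatch. This is a simplified illustration, not the testtools implementation (it omits the exc_info handling and non-user-exception propagation above):

```python
# Standalone sketch of raises(): None means the expected exception
# was raised; a string stands in for a Mismatch description.
def sketch_raises(exception_type):
    def match(matchee):
        try:
            result = matchee()
        except exception_type:
            return None
        return '%r returned %r' % (matchee, result)
    return match

assert sketch_raises(ZeroDivisionError)(lambda: 1 / 0) is None
assert sketch_raises(KeyError)(lambda: 42) is not None
```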


@@ -1,192 +0,0 @@
# Copyright (c) 2009-2012 testtools developers. See LICENSE for details.
"""Matchers for things related to the filesystem."""
__all__ = [
'FileContains',
'DirExists',
'FileExists',
'HasPermissions',
'PathExists',
'SamePath',
'TarballContains',
]
import os
import tarfile
from ._basic import Equals
from ._higherorder import (
MatchesAll,
MatchesPredicate,
)
from ._impl import (
Matcher,
)
def PathExists():
"""Matches if the given path exists.
Use like this::
assertThat('/some/path', PathExists())
"""
return MatchesPredicate(os.path.exists, "%s does not exist.")
def DirExists():
"""Matches if the path exists and is a directory."""
return MatchesAll(
PathExists(),
MatchesPredicate(os.path.isdir, "%s is not a directory."),
first_only=True)
def FileExists():
"""Matches if the given path exists and is a file."""
return MatchesAll(
PathExists(),
MatchesPredicate(os.path.isfile, "%s is not a file."),
first_only=True)
class DirContains(Matcher):
"""Matches if the given directory contains files with the given names.
That is, the directory listing is exactly equal to the given file names.
"""
def __init__(self, filenames=None, matcher=None):
"""Construct a ``DirContains`` matcher.
Can be used in a basic mode where the whole directory listing is
matched against an expected directory listing (by passing
``filenames``). Can also be used in a more advanced way where the
whole directory listing is matched against an arbitrary matcher (by
passing ``matcher`` instead).
:param filenames: If specified, match the sorted directory listing
against this list of filenames, sorted.
:param matcher: If specified, match the sorted directory listing
against this matcher.
"""
if filenames is None and matcher is None:
raise AssertionError(
"Must provide one of `filenames` or `matcher`.")
if None not in (filenames, matcher):
raise AssertionError(
"Must provide either `filenames` or `matcher`, not both.")
if filenames is None:
self.matcher = matcher
else:
self.matcher = Equals(sorted(filenames))
def match(self, path):
mismatch = DirExists().match(path)
if mismatch is not None:
return mismatch
return self.matcher.match(sorted(os.listdir(path)))
class FileContains(Matcher):
"""Matches if the given file has the specified contents."""
def __init__(self, contents=None, matcher=None):
"""Construct a ``FileContains`` matcher.
Can be used in a basic mode where the file contents are compared for
equality against the expected file contents (by passing ``contents``).
Can also be used in a more advanced way where the file contents are
matched against an arbitrary matcher (by passing ``matcher`` instead).
:param contents: If specified, match the contents of the file with
these contents.
:param matcher: If specified, match the contents of the file against
this matcher.
"""
if contents is None and matcher is None:
raise AssertionError(
"Must provide one of `contents` or `matcher`.")
if None not in (contents, matcher):
raise AssertionError(
"Must provide either `contents` or `matcher`, not both.")
if matcher is None:
self.matcher = Equals(contents)
else:
self.matcher = matcher
def match(self, path):
mismatch = PathExists().match(path)
if mismatch is not None:
return mismatch
f = open(path)
try:
actual_contents = f.read()
return self.matcher.match(actual_contents)
finally:
f.close()
def __str__(self):
return "File at path exists and contains %s" % self.matcher
class HasPermissions(Matcher):
"""Matches if a file has the given permissions.
Permissions are specified and matched as a four-digit octal string.
"""
def __init__(self, octal_permissions):
"""Construct a HasPermissions matcher.
:param octal_permissions: A four digit octal string, representing the
intended access permissions. e.g. '0775' for rwxrwxr-x.
"""
super(HasPermissions, self).__init__()
self.octal_permissions = octal_permissions
def match(self, filename):
permissions = oct(os.stat(filename).st_mode)[-4:]
return Equals(self.octal_permissions).match(permissions)
class SamePath(Matcher):
"""Matches if two paths are the same.
That is, the paths are equal, or they point to the same file but in
different ways. The paths do not have to exist.
"""
def __init__(self, path):
super(SamePath, self).__init__()
self.path = path
def match(self, other_path):
f = lambda x: os.path.abspath(os.path.realpath(x))
return Equals(f(self.path)).match(f(other_path))
class TarballContains(Matcher):
"""Matches if the given tarball contains the given paths.
Uses TarFile.getnames() to get the paths out of the tarball.
"""
def __init__(self, paths):
super(TarballContains, self).__init__()
self.paths = paths
self.path_matcher = Equals(sorted(self.paths))
def match(self, tarball_path):
# Open underlying file first to ensure it's always closed:
# <http://bugs.python.org/issue10233>
f = open(tarball_path, "rb")
try:
tarball = tarfile.open(tarball_path, fileobj=f)
try:
return self.path_matcher.match(sorted(tarball.getnames()))
finally:
tarball.close()
finally:
f.close()
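The comparisons HasPermissions and SamePath perform can be demonstrated with the stdlib alone. This is a sketch of the underlying mechanics, not a use of the matchers themselves; it assumes a POSIX filesystem where `chmod` applies exactly.

```python
import os
import tempfile

# HasPermissions compares the last four octal digits of st_mode.
fd, path = tempfile.mkstemp()
os.close(fd)
try:
    os.chmod(path, 0o640)
    permissions = oct(os.stat(path).st_mode)[-4:]  # e.g. '0640'
    assert permissions == '0640'

    # SamePath canonicalises both sides with realpath + abspath.
    canonical = lambda p: os.path.abspath(os.path.realpath(p))
    dotted = os.path.join(os.path.dirname(path), '.', os.path.basename(path))
    assert canonical(path) == canonical(dotted)
finally:
    os.remove(path)
```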

View File

@@ -1,368 +0,0 @@
# Copyright (c) 2009-2012 testtools developers. See LICENSE for details.
__all__ = [
'AfterPreprocessing',
'AllMatch',
'Annotate',
'AnyMatch',
'MatchesAny',
'MatchesAll',
'Not',
]
import types
from ._impl import (
Matcher,
Mismatch,
MismatchDecorator,
)
class MatchesAny(object):
"""Matches if any of the matchers it is created with match."""
def __init__(self, *matchers):
self.matchers = matchers
def match(self, matchee):
results = []
for matcher in self.matchers:
mismatch = matcher.match(matchee)
if mismatch is None:
return None
results.append(mismatch)
return MismatchesAll(results)
def __str__(self):
return "MatchesAny(%s)" % ', '.join([
str(matcher) for matcher in self.matchers])
class MatchesAll(object):
"""Matches if all of the matchers it is created with match."""
def __init__(self, *matchers, **options):
"""Construct a MatchesAll matcher.
Just list the component matchers as arguments in the ``*args``
style. If you want only the first mismatch to be reported, pass in
first_only=True as a keyword argument. By default, all mismatches are
reported.
"""
self.matchers = matchers
self.first_only = options.get('first_only', False)
def __str__(self):
return 'MatchesAll(%s)' % ', '.join(map(str, self.matchers))
def match(self, matchee):
results = []
for matcher in self.matchers:
mismatch = matcher.match(matchee)
if mismatch is not None:
if self.first_only:
return mismatch
results.append(mismatch)
if results:
return MismatchesAll(results)
else:
return None
class MismatchesAll(Mismatch):
"""A mismatch with many child mismatches."""
def __init__(self, mismatches, wrap=True):
self.mismatches = mismatches
self._wrap = wrap
def describe(self):
descriptions = []
if self._wrap:
descriptions = ["Differences: ["]
for mismatch in self.mismatches:
descriptions.append(mismatch.describe())
if self._wrap:
descriptions.append("]")
return '\n'.join(descriptions)
class Not(object):
"""Inverts a matcher."""
def __init__(self, matcher):
self.matcher = matcher
def __str__(self):
return 'Not(%s)' % (self.matcher,)
def match(self, other):
mismatch = self.matcher.match(other)
if mismatch is None:
return MatchedUnexpectedly(self.matcher, other)
else:
return None
class MatchedUnexpectedly(Mismatch):
"""A thing matched when it wasn't supposed to."""
def __init__(self, matcher, other):
self.matcher = matcher
self.other = other
def describe(self):
return "%r matches %s" % (self.other, self.matcher)
class Annotate(object):
"""Annotates a matcher with a descriptive string.
Mismatches are then described as '<mismatch>: <annotation>'.
"""
def __init__(self, annotation, matcher):
self.annotation = annotation
self.matcher = matcher
@classmethod
def if_message(cls, annotation, matcher):
"""Annotate ``matcher`` only if ``annotation`` is non-empty."""
if not annotation:
return matcher
return cls(annotation, matcher)
def __str__(self):
return 'Annotate(%r, %s)' % (self.annotation, self.matcher)
def match(self, other):
mismatch = self.matcher.match(other)
if mismatch is not None:
return AnnotatedMismatch(self.annotation, mismatch)
class PostfixedMismatch(MismatchDecorator):
"""A mismatch annotated with a descriptive string."""
def __init__(self, annotation, mismatch):
super(PostfixedMismatch, self).__init__(mismatch)
self.annotation = annotation
self.mismatch = mismatch
def describe(self):
return '%s: %s' % (self.original.describe(), self.annotation)
AnnotatedMismatch = PostfixedMismatch
class PrefixedMismatch(MismatchDecorator):
def __init__(self, prefix, mismatch):
super(PrefixedMismatch, self).__init__(mismatch)
self.prefix = prefix
def describe(self):
return '%s: %s' % (self.prefix, self.original.describe())
class AfterPreprocessing(object):
"""Matches if the value matches after passing through a function.
This can be used to aid in creating trivial matchers as functions, for
example::
def PathHasFileContent(content):
def _read(path):
return open(path).read()
return AfterPreprocessing(_read, Equals(content))
"""
def __init__(self, preprocessor, matcher, annotate=True):
"""Create an AfterPreprocessing matcher.
:param preprocessor: A function called with the matchee before
matching.
:param matcher: What to match the preprocessed matchee against.
:param annotate: Whether or not to annotate the matcher with
something explaining how we transformed the matchee. Defaults
to True.
"""
self.preprocessor = preprocessor
self.matcher = matcher
self.annotate = annotate
def _str_preprocessor(self):
if isinstance(self.preprocessor, types.FunctionType):
return '<function %s>' % self.preprocessor.__name__
return str(self.preprocessor)
def __str__(self):
return "AfterPreprocessing(%s, %s)" % (
self._str_preprocessor(), self.matcher)
def match(self, value):
after = self.preprocessor(value)
if self.annotate:
matcher = Annotate(
"after %s on %r" % (self._str_preprocessor(), value),
self.matcher)
else:
matcher = self.matcher
return matcher.match(after)
# This is the old, deprecated spelling of the name, kept for backwards
# compatibility.
AfterPreproccessing = AfterPreprocessing
class AllMatch(object):
"""Matches if all provided values match the given matcher."""
def __init__(self, matcher):
self.matcher = matcher
def __str__(self):
return 'AllMatch(%s)' % (self.matcher,)
def match(self, values):
mismatches = []
for value in values:
mismatch = self.matcher.match(value)
if mismatch:
mismatches.append(mismatch)
if mismatches:
return MismatchesAll(mismatches)
class AnyMatch(object):
"""Matches if any of the provided values match the given matcher."""
def __init__(self, matcher):
self.matcher = matcher
def __str__(self):
return 'AnyMatch(%s)' % (self.matcher,)
def match(self, values):
mismatches = []
for value in values:
mismatch = self.matcher.match(value)
if mismatch:
mismatches.append(mismatch)
else:
return None
return MismatchesAll(mismatches)
class MatchesPredicate(Matcher):
"""Match if a given function returns True.
It is reasonably common to want to make a very simple matcher based on a
function that you already have that returns True or False given a single
argument (i.e. a predicate function). This matcher makes it very easy to
do so. e.g.::
IsEven = MatchesPredicate(lambda x: x % 2 == 0, '%s is not even')
self.assertThat(4, IsEven)
"""
def __init__(self, predicate, message):
"""Create a ``MatchesPredicate`` matcher.
:param predicate: A function that takes a single argument and returns
a value that will be interpreted as a boolean.
:param message: A message to describe a mismatch. It will be formatted
with '%' and be given whatever was passed to ``match()``. Thus, it
needs to contain exactly one thing like '%s', '%d' or '%f'.
"""
self.predicate = predicate
self.message = message
def __str__(self):
return '%s(%r, %r)' % (
self.__class__.__name__, self.predicate, self.message)
def match(self, x):
if not self.predicate(x):
return Mismatch(self.message % x)
def MatchesPredicateWithParams(predicate, message, name=None):
"""Match if a given parameterised function returns True.
It is reasonably common to want to make a very simple matcher based on a
function that you already have that returns True or False given some
arguments. This matcher makes it very easy to do so. e.g.::
HasLength = MatchesPredicate(
lambda x, y: len(x) == y, 'len({0}) is not {1}')
# This assertion will fail, as 'len([1, 2]) == 3' is False.
self.assertThat([1, 2], HasLength(3))
Note that unlike MatchesPredicate, MatchesPredicateWithParams returns a
factory which you then customise to use by constructing an actual matcher
from it.
The predicate function should take the object to match as its first
parameter. Any additional parameters supplied when constructing a matcher
are supplied to the predicate as additional parameters when checking for a
match.
:param predicate: The predicate function.
:param message: A format string for describing mis-matches.
:param name: Optional replacement name for the matcher.
"""
def construct_matcher(*args, **kwargs):
return _MatchesPredicateWithParams(
predicate, message, name, *args, **kwargs)
return construct_matcher
class _MatchesPredicateWithParams(Matcher):
def __init__(self, predicate, message, name, *args, **kwargs):
"""Create a ``MatchesPredicateWithParams`` matcher.
:param predicate: A function that takes an object to match and
additional params as given in ``*args`` and ``**kwargs``. The
result of the function will be interpreted as a boolean to
determine a match.
:param message: A message to describe a mismatch. It will be formatted
with .format() and be given a tuple containing whatever was passed
to ``match()`` + ``*args`` in ``*args``, and whatever was passed to
``**kwargs`` as its ``**kwargs``.
For instance, to format a single parameter::
"{0} is not a {1}"
To format a keyword arg::
"{0} is not a {type_to_check}"
:param name: What name to use for the matcher class. Pass None to use
the default.
"""
self.predicate = predicate
self.message = message
self.name = name
self.args = args
self.kwargs = kwargs
def __str__(self):
args = [str(arg) for arg in self.args]
kwargs = ["%s=%s" % item for item in self.kwargs.items()]
args = ", ".join(args + kwargs)
if self.name is None:
name = 'MatchesPredicateWithParams(%r, %r)' % (
self.predicate, self.message)
else:
name = self.name
return '%s(%s)' % (name, args)
def match(self, x):
if not self.predicate(x, *self.args, **self.kwargs):
return Mismatch(
self.message.format(*((x,) + self.args), **self.kwargs))
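The factory shape of MatchesPredicateWithParams can be sketched with plain closures. Names here (`matches_predicate_with_params`, the returned `match` callable) are illustrative simplifications of the class-based implementation above.

```python
def matches_predicate_with_params(predicate, message):
    """Return a factory; calling the factory binds the extra parameters."""
    def construct_matcher(*args):
        def match(x):
            # None means "matched"; otherwise return the formatted mismatch.
            if not predicate(x, *args):
                return message.format(*((x,) + args))
            return None
        return match
    return construct_matcher

HasLength = matches_predicate_with_params(
    lambda x, y: len(x) == y, 'len({0}) is not {1}')

assert HasLength(2)([1, 2]) is None
assert HasLength(3)([1, 2]) == 'len([1, 2]) is not 3'
```

Note the two-step call: `HasLength` is the factory, `HasLength(3)` is the concrete matcher, exactly as described in the docstring above.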

View File

@@ -1,173 +0,0 @@
# Copyright (c) 2009-2012 testtools developers. See LICENSE for details.
"""Matchers, a way to express complex assertions outside the testcase.
Inspired by 'hamcrest'.
Matcher provides the abstract API that all matchers need to implement.
Bundled matchers are listed in __all__: a list can be obtained by running
$ python -c 'import testtools.matchers; print testtools.matchers.__all__'
"""
__all__ = [
'Matcher',
'Mismatch',
'MismatchDecorator',
'MismatchError',
]
from testtools.compat import (
_isbytes,
istext,
str_is_unicode,
text_repr
)
class Matcher(object):
"""A pattern matcher.
A Matcher must implement match and __str__ to be used by
testtools.TestCase.assertThat. Matcher.match(thing) returns None when
thing is completely matched, and a Mismatch object otherwise.
Matchers can be useful outside of test cases, as they are simply a
pattern matching language expressed as objects.
testtools.matchers is inspired by hamcrest, but is pythonic rather than
a Java transcription.
"""
def match(self, something):
"""Return None if this matcher matches something, a Mismatch otherwise.
"""
raise NotImplementedError(self.match)
def __str__(self):
"""Get a sensible human representation of the matcher.
This should include the parameters given to the matcher and any
state that would affect the match operation.
"""
raise NotImplementedError(self.__str__)
class Mismatch(object):
"""An object describing a mismatch detected by a Matcher."""
def __init__(self, description=None, details=None):
"""Construct a `Mismatch`.
:param description: A description to use. If not provided,
`Mismatch.describe` must be implemented.
:param details: Extra details about the mismatch. Defaults
to the empty dict.
"""
if description:
self._description = description
if details is None:
details = {}
self._details = details
def describe(self):
"""Describe the mismatch.
This should be either a human-readable string or castable to a string.
In particular, it should either be plain ascii or unicode on Python 2,
and care should be taken to escape control characters.
"""
try:
return self._description
except AttributeError:
raise NotImplementedError(self.describe)
def get_details(self):
"""Get extra details about the mismatch.
This allows the mismatch to provide extra information beyond the basic
description, including large text or binary files, or debugging internals
without having to force it to fit in the output of 'describe'.
The testtools assertion assertThat will query get_details and attach
all its values to the test, permitting them to be reported in whatever
manner the test environment chooses.
:return: a dict mapping names to Content objects. name is a string to
name the detail, and the Content object is the detail to add
to the result. For more information see the API to which items from
this dict are passed testtools.TestCase.addDetail.
"""
return getattr(self, '_details', {})
def __repr__(self):
return "<testtools.matchers.Mismatch object at %x attributes=%r>" % (
id(self), self.__dict__)
class MismatchError(AssertionError):
"""Raised when a mismatch occurs."""
# This class exists to work around
# <https://bugs.launchpad.net/testtools/+bug/804127>. It provides a
# guaranteed way of getting a readable exception, no matter what crazy
# characters are in the matchee, matcher or mismatch.
def __init__(self, matchee, matcher, mismatch, verbose=False):
super(MismatchError, self).__init__()
self.matchee = matchee
self.matcher = matcher
self.mismatch = mismatch
self.verbose = verbose
def __str__(self):
difference = self.mismatch.describe()
if self.verbose:
# GZ 2011-08-24: Smelly API? Better to take any object and special
# case text inside?
if istext(self.matchee) or _isbytes(self.matchee):
matchee = text_repr(self.matchee, multiline=False)
else:
matchee = repr(self.matchee)
return (
'Match failed. Matchee: %s\nMatcher: %s\nDifference: %s\n'
% (matchee, self.matcher, difference))
else:
return difference
if not str_is_unicode:
__unicode__ = __str__
def __str__(self):
return self.__unicode__().encode("ascii", "backslashreplace")
class MismatchDecorator(object):
"""Decorate a ``Mismatch``.
Forwards all messages to the original mismatch object. Probably the best
way to use this is to inherit from this class and then provide your own
custom decoration logic.
"""
def __init__(self, original):
"""Construct a `MismatchDecorator`.
:param original: A `Mismatch` object to decorate.
"""
self.original = original
def __repr__(self):
return '<testtools.matchers.MismatchDecorator(%r)>' % (self.original,)
def describe(self):
return self.original.describe()
def get_details(self):
return self.original.get_details()
# Signal that this is part of the testing framework, and that code from this
# should not normally appear in tracebacks.
__unittest = True
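The Matcher/Mismatch protocol defined above is small enough to illustrate with a made-up matcher. `IsOdd` and `SimpleMismatch` are hypothetical examples, not part of testtools; they show only that `match()` returns None on success and a describable object on failure.

```python
class SimpleMismatch(object):
    """Minimal Mismatch: just carries a description."""
    def __init__(self, description):
        self._description = description
    def describe(self):
        return self._description

class IsOdd(object):
    """Minimal Matcher implementing match() and __str__()."""
    def match(self, value):
        if value % 2 == 1:
            return None
        return SimpleMismatch('%r is not odd' % (value,))
    def __str__(self):
        return 'IsOdd()'

assert IsOdd().match(3) is None
assert IsOdd().match(4).describe() == '4 is not odd'
```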

View File

@@ -1,109 +0,0 @@
# Copyright (c) 2009-2016 testtools developers. See LICENSE for details.
__all__ = [
'Warnings',
'WarningMessage',
'IsDeprecated']
import warnings
from ._basic import Is
from ._const import Always
from ._datastructures import MatchesListwise, MatchesStructure
from ._higherorder import (
AfterPreprocessing,
Annotate,
MatchesAll,
Not,
)
from ._impl import Mismatch
def WarningMessage(category_type, message=None, filename=None, lineno=None,
line=None):
r"""
Create a matcher that will match `warnings.WarningMessage`\s.
For example, to match captured `DeprecationWarning`\s with a message about
some ``foo`` being replaced with ``bar``:
.. code-block:: python
WarningMessage(DeprecationWarning,
message=MatchesAll(
Contains('foo is deprecated'),
Contains('use bar instead')))
:param type category_type: A warning type, for example
`DeprecationWarning`.
:param message: A matcher object that will be evaluated against the
warning's message.
:param filename: A matcher object that will be evaluated against
the warning's filename.
:param lineno: A matcher object that will be evaluated against the
warning's line number.
:param line: A matcher object that will be evaluated against the
warning's line of source code.
"""
category_matcher = Is(category_type)
message_matcher = message or Always()
filename_matcher = filename or Always()
lineno_matcher = lineno or Always()
line_matcher = line or Always()
return MatchesStructure(
category=Annotate(
"Warning's category type does not match",
category_matcher),
message=Annotate(
"Warning's message does not match",
AfterPreprocessing(str, message_matcher)),
filename=Annotate(
"Warning's filename does not match",
filename_matcher),
lineno=Annotate(
"Warning's line number does not match",
lineno_matcher),
line=Annotate(
"Warning's source line does not match",
line_matcher))
class Warnings(object):
"""
Match if the matchee produces warnings.
"""
def __init__(self, warnings_matcher=None):
"""
Create a Warnings matcher.
:param warnings_matcher: Optional validator for the warnings emitted by
matchee. If no warnings_matcher is supplied then the simple fact that
at least one warning is emitted is considered enough to match on.
"""
self.warnings_matcher = warnings_matcher
def match(self, matchee):
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
matchee()
if self.warnings_matcher is not None:
return self.warnings_matcher.match(w)
elif not w:
return Mismatch('Expected at least one warning, got none')
def __str__(self):
return 'Warnings({!s})'.format(self.warnings_matcher)
def IsDeprecated(message):
"""
Make a matcher that checks that a callable produces exactly one
`DeprecationWarning`.
:param message: Matcher for the warning message.
"""
return Warnings(
MatchesListwise([
WarningMessage(
category_type=DeprecationWarning,
message=message)]))
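The capture step that Warnings.match relies on is plain stdlib: `warnings.catch_warnings(record=True)` collects emitted warnings for later inspection. `captured_warnings` below is an illustrative helper, not testtools API.

```python
import warnings

def captured_warnings(matchee):
    """Run matchee and return the list of warnings it emitted."""
    with warnings.catch_warnings(record=True) as log:
        # 'always' ensures repeated warnings are not deduplicated away.
        warnings.simplefilter('always')
        matchee()
    return log

log = captured_warnings(
    lambda: warnings.warn('foo is deprecated', DeprecationWarning))
assert len(log) == 1
assert log[0].category is DeprecationWarning
assert 'foo is deprecated' in str(log[0].message)
```

Each recorded entry has `category`, `message`, `filename`, and `lineno` attributes, which is what the WarningMessage matcher above inspects via MatchesStructure.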

View File

@@ -1,97 +0,0 @@
# Copyright (c) 2010 testtools developers. See LICENSE for details.
"""Helpers for monkey-patching Python code."""
__all__ = [
'MonkeyPatcher',
'patch',
]
class MonkeyPatcher(object):
"""A set of monkey-patches that can be applied and removed all together.
Use this to cover up attributes with new objects. Particularly useful for
testing difficult code.
"""
# Marker used to indicate that the patched attribute did not exist on the
# object before we patched it.
_NO_SUCH_ATTRIBUTE = object()
def __init__(self, *patches):
"""Construct a `MonkeyPatcher`.
:param patches: The patches to apply, each should be (obj, name,
new_value). Providing patches here is equivalent to calling
`add_patch`.
"""
# List of patches to apply in (obj, name, value).
self._patches_to_apply = []
# List of the original values for things that have been patched.
# (obj, name, value) format.
self._originals = []
for patch in patches:
self.add_patch(*patch)
def add_patch(self, obj, name, value):
"""Add a patch to overwrite 'name' on 'obj' with 'value'.
The attribute C{name} on C{obj} will be assigned to C{value} when
C{patch} is called or during C{run_with_patches}.
You can restore the original values with a call to restore().
"""
self._patches_to_apply.append((obj, name, value))
def patch(self):
"""Apply all of the patches that have been specified with `add_patch`.
Reverse this operation using L{restore}.
"""
for obj, name, value in self._patches_to_apply:
original_value = getattr(obj, name, self._NO_SUCH_ATTRIBUTE)
self._originals.append((obj, name, original_value))
setattr(obj, name, value)
def restore(self):
"""Restore all original values to any patched objects.
If the patched attribute did not exist on an object before it was
patched, `restore` will delete the attribute so as to return the
object to its original state.
"""
while self._originals:
obj, name, value = self._originals.pop()
if value is self._NO_SUCH_ATTRIBUTE:
delattr(obj, name)
else:
setattr(obj, name, value)
def run_with_patches(self, f, *args, **kw):
"""Run 'f' with the given args and kwargs with all patches applied.
Restores all objects to their original state when finished.
"""
self.patch()
try:
return f(*args, **kw)
finally:
self.restore()
def patch(obj, attribute, value):
"""Set 'obj.attribute' to 'value' and return a callable to restore 'obj'.
If 'attribute' is not set on 'obj' already, then the returned callable
will delete the attribute when called.
:param obj: An object to monkey-patch.
:param attribute: The name of the attribute to patch.
:param value: The value to set 'obj.attribute' to.
:return: A nullary callable that, when run, will restore 'obj' to its
original state.
"""
patcher = MonkeyPatcher((obj, attribute, value))
patcher.patch()
return patcher.restore
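The sentinel trick that `patch` and MonkeyPatcher use can be shown in a few lines: a unique marker object distinguishes "attribute was absent" from "attribute was None", so restore knows whether to `delattr` or reassign. `simple_patch` and `Config` are illustrative names, not testtools API.

```python
_MISSING = object()  # sentinel: attribute did not exist before patching

def simple_patch(obj, attribute, value):
    """Set obj.attribute to value; return a callable that undoes it."""
    original = getattr(obj, attribute, _MISSING)
    setattr(obj, attribute, value)
    def restore():
        if original is _MISSING:
            delattr(obj, attribute)
        else:
            setattr(obj, attribute, original)
    return restore

class Config(object):  # illustrative target object
    retries = 3

restore = simple_patch(Config, 'retries', 10)
assert Config.retries == 10
restore()
assert Config.retries == 3

restore = simple_patch(Config, 'timeout', 5)  # attribute did not exist
assert Config.timeout == 5
restore()
assert not hasattr(Config, 'timeout')
```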

View File

@@ -1,267 +0,0 @@
# Copyright (c) 2009 testtools developers. See LICENSE for details.
"""python -m testtools.run testspec [testspec...]
Run some tests with the testtools extended API.
For instance, to run the testtools test suite.
$ python -m testtools.run testtools.tests.test_suite
"""
import argparse
from functools import partial
import os.path
import sys
from extras import safe_hasattr, try_imports
# To let setup.py work, make this a conditional import.
unittest = try_imports(['unittest2', 'unittest'])
from testtools import TextTestResult, testcase
from testtools.compat import classtypes, istext, unicode_output_stream
from testtools.testsuite import filter_by_ids, iterate_tests, sorted_tests
defaultTestLoader = unittest.defaultTestLoader
defaultTestLoaderCls = unittest.TestLoader
have_discover = True
# This shouldn't really be public - it's legacy. Try to set it if we can, and
# if we can't (during installs before unittest2 is installed) just stub it out
# to None.
discover_impl = getattr(unittest, 'loader', None)
# Kept for API compatibility, but no longer used.
BUFFEROUTPUT = ""
CATCHBREAK = ""
FAILFAST = ""
USAGE_AS_MAIN = ""
def list_test(test):
"""Return the test ids that would be run if test() was run.
When things fail to import they can be represented as well, though
we use an ugly hack (see http://bugs.python.org/issue19746 for details)
to determine that. The difference matters because if a user is
filtering tests to run on the returned ids, a failed import can reduce
the visible tests but it can be impossible to tell that the selected
test would have been one of the imported ones.
:return: A tuple of test ids that would run and error strings
describing things that failed to import.
"""
unittest_import_strs = set([
'unittest2.loader.ModuleImportFailure.',
'unittest.loader.ModuleImportFailure.',
'discover.ModuleImportFailure.'
])
test_ids = []
errors = []
for test in iterate_tests(test):
# Ugly, but unavoidable; see the docstring above.
for prefix in unittest_import_strs:
if test.id().startswith(prefix):
errors.append(test.id()[len(prefix):])
break
else:
test_ids.append(test.id())
return test_ids, errors
class TestToolsTestRunner(object):
""" A thunk object to support unittest.TestProgram."""
def __init__(self, verbosity=None, failfast=None, buffer=None,
stdout=None, tb_locals=False, **kwargs):
"""Create a TestToolsTestRunner.
:param verbosity: Ignored.
:param failfast: Stop running tests at the first failure.
:param buffer: Ignored.
:param stdout: Stream to use for stdout.
:param tb_locals: If True include local variables in tracebacks.
"""
self.failfast = failfast
if stdout is None:
stdout = sys.stdout
self.stdout = stdout
self.tb_locals = tb_locals
def list(self, test, loader):
"""List the tests that would be run if test() was run."""
test_ids, _ = list_test(test)
for test_id in test_ids:
self.stdout.write('%s\n' % test_id)
errors = loader.errors
if errors:
for test_id in errors:
self.stdout.write('%s\n' % test_id)
sys.exit(2)
def run(self, test):
"Run the given test case or test suite."
result = TextTestResult(
unicode_output_stream(self.stdout), failfast=self.failfast,
tb_locals=self.tb_locals)
result.startTestRun()
try:
return test.run(result)
finally:
result.stopTestRun()
####################
# Taken from python 2.7 and slightly modified for compatibility with
# older versions. Delete when 2.7 is the oldest supported version.
# Modifications:
# - If --catch is given, check that installHandler is available, as
# it won't be on old python versions or python builds without signals.
# - --list has been added which can list tests (should be upstreamed).
# - --load-list has been added which can reduce the tests used (should be
# upstreamed).
class TestProgram(unittest.TestProgram):
"""A command-line program that runs a set of tests; this is primarily
for making test modules conveniently executable.
"""
# defaults for testing
module=None
verbosity = 1
failfast = catchbreak = buffer = progName = None
_discovery_parser = None
def __init__(self, module=__name__, defaultTest=None, argv=None,
testRunner=None, testLoader=defaultTestLoader,
exit=True, verbosity=1, failfast=None, catchbreak=None,
buffer=None, stdout=None, tb_locals=False):
if module == __name__:
self.module = None
elif istext(module):
self.module = __import__(module)
for part in module.split('.')[1:]:
self.module = getattr(self.module, part)
else:
self.module = module
if argv is None:
argv = sys.argv
if stdout is None:
stdout = sys.stdout
self.stdout = stdout
self.exit = exit
self.failfast = failfast
self.catchbreak = catchbreak
self.verbosity = verbosity
self.buffer = buffer
self.tb_locals = tb_locals
self.defaultTest = defaultTest
# XXX: Local edit (see http://bugs.python.org/issue22860)
self.listtests = False
self.load_list = None
self.testRunner = testRunner
self.testLoader = testLoader
progName = argv[0]
if progName.endswith('%srun.py' % os.path.sep):
elements = progName.split(os.path.sep)
progName = '%s.run' % elements[-2]
else:
progName = os.path.basename(argv[0])
self.progName = progName
self.parseArgs(argv)
# XXX: Local edit (see http://bugs.python.org/issue22860)
if self.load_list:
# TODO: preserve existing suites (like testresources does in
# OptimisingTestSuite.add, but with a standard protocol).
# This is needed because the load_tests hook allows arbitrary
# suites, even if that is rarely used.
source = open(self.load_list, 'rb')
try:
lines = source.readlines()
finally:
source.close()
test_ids = set(line.strip().decode('utf-8') for line in lines)
self.test = filter_by_ids(self.test, test_ids)
# XXX: Local edit (see http://bugs.python.org/issue22860)
if not self.listtests:
self.runTests()
else:
runner = self._get_runner()
if safe_hasattr(runner, 'list'):
try:
runner.list(self.test, loader=self.testLoader)
except TypeError:
runner.list(self.test)
else:
for test in iterate_tests(self.test):
self.stdout.write('%s\n' % test.id())
del self.testLoader.errors[:]
def _getParentArgParser(self):
parser = super(TestProgram, self)._getParentArgParser()
# XXX: Local edit (see http://bugs.python.org/issue22860)
parser.add_argument('-l', '--list', dest='listtests', default=False,
action='store_true', help='List tests rather than executing them')
parser.add_argument('--load-list', dest='load_list', default=None,
help='Specifies a file containing test ids, only tests matching '
'those ids are executed')
return parser
def _do_discovery(self, argv, Loader=None):
super(TestProgram, self)._do_discovery(argv, Loader=Loader)
# XXX: Local edit (see http://bugs.python.org/issue22860)
self.test = sorted_tests(self.test)
def runTests(self):
# XXX: Local edit (see http://bugs.python.org/issue22860)
if (self.catchbreak
and getattr(unittest, 'installHandler', None) is not None):
unittest.installHandler()
testRunner = self._get_runner()
self.result = testRunner.run(self.test)
if self.exit:
sys.exit(not self.result.wasSuccessful())
def _get_runner(self):
# XXX: Local edit (see http://bugs.python.org/issue22860)
if self.testRunner is None:
self.testRunner = TestToolsTestRunner
try:
try:
testRunner = self.testRunner(verbosity=self.verbosity,
failfast=self.failfast,
buffer=self.buffer,
stdout=self.stdout,
tb_locals=self.tb_locals)
except TypeError:
# didn't accept the tb_locals parameter
testRunner = self.testRunner(verbosity=self.verbosity,
failfast=self.failfast,
buffer=self.buffer,
stdout=self.stdout)
except TypeError:
# didn't accept the verbosity, buffer, failfast or stdout arguments
# Try with the prior contract
try:
testRunner = self.testRunner(verbosity=self.verbosity,
failfast=self.failfast,
buffer=self.buffer)
except TypeError:
# Now try calling it with defaults
try:
testRunner = self.testRunner()
except TypeError:
# it is assumed to be a TestRunner instance
testRunner = self.testRunner
return testRunner
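The nested `try`/`except TypeError` cascade above probes progressively older runner signatures. The same pattern can be sketched standalone; `construct_with_fallback` and the two runner classes here are illustrative stand-ins, not testtools API:

```python
def construct_with_fallback(factory, kwarg_sets):
    """Try calling factory with each kwargs dict in turn.

    A TypeError (unknown keyword argument) moves us on to the next,
    older signature, mirroring the cascade in _get_runner.
    """
    for kwargs in kwarg_sets:
        try:
            return factory(**kwargs)
        except TypeError:
            continue
    # Nothing was callable with these contracts: assume an instance.
    return factory

class NewRunner(object):
    def __init__(self, verbosity, failfast):
        self.verbosity = verbosity
        self.failfast = failfast

class OldRunner(object):
    def __init__(self, verbosity):
        self.verbosity = verbosity

attempts = [
    {'verbosity': 2, 'failfast': True},  # newest contract
    {'verbosity': 2},                    # prior contract
    {},                                  # no-argument constructor
]
new = construct_with_fallback(NewRunner, attempts)  # first dict fits
old = construct_with_fallback(OldRunner, attempts)  # falls back once
```

The ordering matters: the richest signature is attempted first, so a modern runner never silently loses options it supports.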
################
def main(argv, stdout):
program = TestProgram(argv=argv, testRunner=partial(TestToolsTestRunner, stdout=stdout),
stdout=stdout)
if __name__ == '__main__':
main(sys.argv, sys.stdout)


@@ -1,229 +0,0 @@
# Copyright (c) 2009-2010 testtools developers. See LICENSE for details.
"""Individual test case execution."""
__all__ = [
'MultipleExceptions',
'RunTest',
]
import sys
from testtools.testresult import ExtendedToOriginalDecorator
class MultipleExceptions(Exception):
"""Represents many exceptions raised from some operation.
:ivar args: The sys.exc_info() tuples for each exception.
"""
class RunTest(object):
"""An object to run a test.
RunTest objects are used to implement the internal logic involved in
running a test. TestCase.__init__ stores _RunTest as the class of RunTest
to execute. Passing the runTest= parameter to TestCase.__init__ allows a
different RunTest class to be used to execute the test.
Subclassing or replacing RunTest can be useful to add functionality to the
way that tests are run in a given project.
:ivar case: The test case that is to be run.
:ivar result: The result object a case is reporting to.
:ivar handlers: A list of (ExceptionClass, handler_function) for
exceptions that should be caught if raised from the user
code. Exceptions that are caught are checked against this list in
first to last order. There is a catch-all of 'Exception' at the end
of the list, so to add a new exception to the list, insert it at the
front (which ensures that it will be checked before any existing base
classes in the list). If you add multiple exceptions, some of which are
subclasses of each other, add the most specific exceptions last (so
they come before their parent classes in the list).
:ivar exception_caught: An object returned when _run_user catches an
exception.
:ivar _exceptions: A list of caught exceptions, used to do the single
reporting of error/failure/skip etc.
"""
def __init__(self, case, handlers=None, last_resort=None):
"""Create a RunTest to run a case.
:param case: A testtools.TestCase test case object.
:param handlers: Exception handlers for this RunTest. These are stored
in self.handlers and can be modified later if needed.
:param last_resort: A handler of last resort: any exception which is
not handled by handlers will cause the last resort handler to be
called as last_resort(exc_info), and then the exception will be
raised - aborting the test run as this is inside the runner
machinery rather than the confined context of the test.
"""
self.case = case
self.handlers = handlers or []
self.exception_caught = object()
self._exceptions = []
self.last_resort = last_resort or (lambda case, result, exc: None)
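The handlers docstring above stresses first-to-last lookup, with subclasses inserted ahead of their base classes. A self-contained sketch of that dispatch rule (the exception classes and labels here are illustrative, not testtools code):

```python
class Skip(Exception):
    pass

class SpecialSkip(Skip):
    pass

def dispatch(exc, handlers):
    """Return the label of the first handler whose class matches exc.

    Handlers are checked first to last, so more specific exception
    classes must appear before their base classes in the list.
    """
    for exc_class, label in handlers:
        if isinstance(exc, exc_class):
            return label
    return 'unhandled'

# SpecialSkip before Skip, with the Exception catch-all last.
handlers = [(SpecialSkip, 'special'), (Skip, 'skip'), (Exception, 'error')]
```

If `(Skip, 'skip')` came first, a `SpecialSkip` would match it via `isinstance` and the more specific handler would never fire, which is exactly the ordering hazard the docstring warns about.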
def run(self, result=None):
"""Run self.case reporting activity to result.
:param result: Optional testtools.TestResult to report activity to.
:return: The result object the test was run against.
"""
if result is None:
actual_result = self.case.defaultTestResult()
actual_result.startTestRun()
else:
actual_result = result
try:
return self._run_one(actual_result)
finally:
if result is None:
actual_result.stopTestRun()
def _run_one(self, result):
"""Run one test reporting to result.
:param result: A testtools.TestResult to report activity to.
This result object is decorated with an ExtendedToOriginalDecorator
to ensure that the latest TestResult API can be used with
confidence by client code.
:return: The result object the test was run against.
"""
return self._run_prepared_result(ExtendedToOriginalDecorator(result))
def _run_prepared_result(self, result):
"""Run one test reporting to result.
:param result: A testtools.TestResult to report activity to.
:return: The result object the test was run against.
"""
result.startTest(self.case)
self.result = result
try:
self._exceptions = []
self.case.__testtools_tb_locals__ = getattr(
result, 'tb_locals', False)
self._run_core()
if self._exceptions:
# One or more caught exceptions, now trigger the test's
# reporting method for just one.
e = self._exceptions.pop()
for exc_class, handler in self.handlers:
if isinstance(e, exc_class):
handler(self.case, self.result, e)
break
else:
self.last_resort(self.case, self.result, e)
raise e
finally:
result.stopTest(self.case)
return result
def _run_core(self):
"""Run the user supplied test code."""
test_method = self.case._get_test_method()
if getattr(test_method, '__unittest_skip__', False):
self.result.addSkip(
self.case,
reason=getattr(test_method, '__unittest_skip_why__', None)
)
return
if self.exception_caught == self._run_user(self.case._run_setup,
self.result):
# Don't run the test method if we failed getting here.
self._run_cleanups(self.result)
return
# Run everything from here on in. If any of the methods raise an
# exception we'll have failed.
failed = False
try:
if self.exception_caught == self._run_user(
self.case._run_test_method, self.result):
failed = True
finally:
try:
if self.exception_caught == self._run_user(
self.case._run_teardown, self.result):
failed = True
finally:
try:
if self.exception_caught == self._run_user(
self._run_cleanups, self.result):
failed = True
finally:
if getattr(self.case, 'force_failure', None):
self._run_user(_raise_force_fail_error)
failed = True
if not failed:
self.result.addSuccess(self.case,
details=self.case.getDetails())
def _run_cleanups(self, result):
"""Run the cleanups that have been added with addCleanup.
See the docstring for addCleanup for more information.
:return: None if all cleanups ran without error,
``exception_caught`` if there was an error.
"""
failing = False
while self.case._cleanups:
function, arguments, keywordArguments = self.case._cleanups.pop()
got_exception = self._run_user(
function, *arguments, **keywordArguments)
if got_exception == self.exception_caught:
failing = True
if failing:
return self.exception_caught
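`_run_cleanups` pops cleanups in LIFO order and keeps running the rest even when one fails, reporting failure only at the end. A reduced, standalone sketch of that behaviour (not the testtools implementation, which routes errors through `_run_user`):

```python
def run_cleanups(cleanups, log):
    """Pop and run (function, args, kwargs) triples in LIFO order.

    Every cleanup runs even if an earlier one raises; returns True
    only when all of them succeeded.
    """
    ok = True
    while cleanups:
        fn, args, kwargs = cleanups.pop()
        try:
            fn(*args, **kwargs)
        except Exception as e:
            log.append(('error', e))
            ok = False
    return ok

def boom():
    raise RuntimeError('cleanup failed')

log = []
cleanups = [
    (log.append, ('added first',), {}),
    (boom, (), {}),
    (log.append, ('added last',), {}),
]
ok = run_cleanups(cleanups, log)
```

Because the list is popped from the end, the most recently added cleanup runs first, matching `addCleanup`'s reverse-order contract; the failing middle cleanup does not stop the first-added one from running.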
def _run_user(self, fn, *args, **kwargs):
"""Run a user supplied function.
Exceptions are processed by `_got_user_exception`.
:return: Either whatever 'fn' returns or ``exception_caught`` if
'fn' raised an exception.
"""
try:
return fn(*args, **kwargs)
except:
return self._got_user_exception(sys.exc_info())
def _got_user_exception(self, exc_info, tb_label='traceback'):
"""Called when user code raises an exception.
If 'exc_info' is a `MultipleExceptions`, then we recurse into it
unpacking the errors that it's made up from.
:param exc_info: A sys.exc_info() tuple for the user error.
:param tb_label: An optional string label for the error. If
not specified, will default to 'traceback'.
:return: 'exception_caught' if we catch one of the exceptions that
have handlers in 'handlers', otherwise raise the error.
"""
if exc_info[0] is MultipleExceptions:
for sub_exc_info in exc_info[1].args:
self._got_user_exception(sub_exc_info, tb_label)
return self.exception_caught
try:
e = exc_info[1]
self.case.onException(exc_info, tb_label=tb_label)
finally:
del exc_info
self._exceptions.append(e)
# Yes, this means we catch everything - we re-raise KeyboardInterrupt
# etc later, after tearDown and cleanUp - since those may be cleaning up
# external processes.
return self.exception_caught
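`_got_user_exception` recurses into a `MultipleExceptions` value, unpacking the nested `sys.exc_info()` tuples it carries. A self-contained sketch of that flattening (helper names are illustrative):

```python
import sys

class MultipleExceptions(Exception):
    """args holds the sys.exc_info() tuples of the wrapped errors."""

def capture(exc):
    """Raise and immediately catch exc to get a sys.exc_info() tuple."""
    try:
        raise exc
    except Exception:
        return sys.exc_info()

def flatten(exc_info, collected):
    """Recursively collect leaf exceptions from nested MultipleExceptions."""
    if exc_info[0] is MultipleExceptions:
        for sub in exc_info[1].args:
            flatten(sub, collected)
    else:
        collected.append(exc_info[1])

inner = MultipleExceptions(capture(ValueError('a')), capture(KeyError('b')))
outer = MultipleExceptions(capture(inner), capture(TypeError('c')))
leaves = []
flatten(capture(outer), leaves)
```

Arbitrarily deep nesting collapses into a flat sequence of leaf exceptions, which is what lets the runner report each underlying error individually.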
def _raise_force_fail_error():
raise AssertionError("Forced Test Failure")
# Signal that this is part of the testing framework, and that code from this
# should not normally appear in tracebacks.
__unittest = True


@@ -1,34 +0,0 @@
# Copyright (c) 2012 testtools developers. See LICENSE for details.
"""Tag support."""
class TagContext(object):
"""A tag context."""
def __init__(self, parent=None):
"""Create a new TagContext.
:param parent: If provided, uses this as the parent context. Any tags
that are current on the parent at the time of construction are
current in this context.
"""
self.parent = parent
self._tags = set()
if parent:
self._tags.update(parent.get_current_tags())
def get_current_tags(self):
"""Return any current tags."""
return set(self._tags)
def change_tags(self, new_tags, gone_tags):
"""Change the tags on this context.
:param new_tags: A set of tags to add to this context.
:param gone_tags: A set of tags to remove from this context.
:return: The tags now current on this context.
"""
self._tags.update(new_tags)
self._tags.difference_update(gone_tags)
return self.get_current_tags()
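A child TagContext snapshots its parent's tags at construction time, so later changes on either side do not leak into the other. A quick usage sketch; the class is condensed here (copied from the definition above) so the example stands alone:

```python
class TagContext(object):
    # Condensed copy of the class above, for illustration only.
    def __init__(self, parent=None):
        self.parent = parent
        self._tags = set()
        if parent:
            self._tags.update(parent.get_current_tags())

    def get_current_tags(self):
        # Return a copy so callers cannot mutate internal state.
        return set(self._tags)

    def change_tags(self, new_tags, gone_tags):
        self._tags.update(new_tags)
        self._tags.difference_update(gone_tags)
        return self.get_current_tags()

run_level = TagContext()
run_level.change_tags(set(['slow', 'net']), set())
test_level = TagContext(run_level)              # inherits {'slow', 'net'}
test_level.change_tags(set(['flaky']), set(['net']))
```

Dropping 'net' in the child leaves the parent untouched, which is how per-test tag changes are discarded when `stopTest` restores `self._tags = self._tags.parent` in the result doubles below.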

File diff suppressed because it is too large.


@@ -1,51 +0,0 @@
# Copyright (c) 2009-2012 testtools developers. See LICENSE for details.
"""Test result objects."""
__all__ = [
'CopyStreamResult',
'ExtendedToOriginalDecorator',
'ExtendedToStreamDecorator',
'MultiTestResult',
'ResourcedToStreamDecorator',
'StreamFailFast',
'StreamResult',
'StreamResultRouter',
'StreamSummary',
'StreamTagger',
'StreamToDict',
'StreamToExtendedDecorator',
'StreamToQueue',
'Tagger',
'TestByTestResult',
'TestControl',
'TestResult',
'TestResultDecorator',
'TextTestResult',
'ThreadsafeForwardingResult',
'TimestampingStreamResult',
]
from testtools.testresult.real import (
CopyStreamResult,
ExtendedToOriginalDecorator,
ExtendedToStreamDecorator,
MultiTestResult,
ResourcedToStreamDecorator,
StreamFailFast,
StreamResult,
StreamResultRouter,
StreamSummary,
StreamTagger,
StreamToDict,
StreamToExtendedDecorator,
StreamToQueue,
Tagger,
TestByTestResult,
TestControl,
TestResult,
TestResultDecorator,
TextTestResult,
ThreadsafeForwardingResult,
TimestampingStreamResult,
)


@@ -1,233 +0,0 @@
# Copyright (c) 2009-2016 testtools developers. See LICENSE for details.
"""Doubles of test result objects, useful for testing unittest code."""
from collections import namedtuple
from testtools.tags import TagContext
__all__ = [
'Python26TestResult',
'Python27TestResult',
'ExtendedTestResult',
'TwistedTestResult',
'StreamResult',
]
class LoggingBase(object):
"""Basic support for logging of results."""
def __init__(self, event_log=None):
if event_log is None:
event_log = []
self._events = event_log
class Python26TestResult(LoggingBase):
"""A precisely python 2.6 like test result, that logs."""
def __init__(self, event_log=None):
super(Python26TestResult, self).__init__(event_log=event_log)
self.shouldStop = False
self._was_successful = True
self.testsRun = 0
def addError(self, test, err):
self._was_successful = False
self._events.append(('addError', test, err))
def addFailure(self, test, err):
self._was_successful = False
self._events.append(('addFailure', test, err))
def addSuccess(self, test):
self._events.append(('addSuccess', test))
def startTest(self, test):
self._events.append(('startTest', test))
self.testsRun += 1
def stop(self):
self.shouldStop = True
def stopTest(self, test):
self._events.append(('stopTest', test))
def wasSuccessful(self):
return self._was_successful
class Python27TestResult(Python26TestResult):
"""A precisely python 2.7 like test result, that logs."""
def __init__(self, event_log=None):
super(Python27TestResult, self).__init__(event_log)
self.failfast = False
def addError(self, test, err):
super(Python27TestResult, self).addError(test, err)
if self.failfast:
self.stop()
def addFailure(self, test, err):
super(Python27TestResult, self).addFailure(test, err)
if self.failfast:
self.stop()
def addExpectedFailure(self, test, err):
self._events.append(('addExpectedFailure', test, err))
def addSkip(self, test, reason):
self._events.append(('addSkip', test, reason))
def addUnexpectedSuccess(self, test):
self._events.append(('addUnexpectedSuccess', test))
if self.failfast:
self.stop()
def startTestRun(self):
self._events.append(('startTestRun',))
def stopTestRun(self):
self._events.append(('stopTestRun',))
class ExtendedTestResult(Python27TestResult):
"""A test result like the proposed extended unittest result API."""
def __init__(self, event_log=None):
super(ExtendedTestResult, self).__init__(event_log)
self._tags = TagContext()
def addError(self, test, err=None, details=None):
self._was_successful = False
self._events.append(('addError', test, err or details))
def addFailure(self, test, err=None, details=None):
self._was_successful = False
self._events.append(('addFailure', test, err or details))
def addExpectedFailure(self, test, err=None, details=None):
self._events.append(('addExpectedFailure', test, err or details))
def addSkip(self, test, reason=None, details=None):
self._events.append(('addSkip', test, reason or details))
def addSuccess(self, test, details=None):
if details:
self._events.append(('addSuccess', test, details))
else:
self._events.append(('addSuccess', test))
def addUnexpectedSuccess(self, test, details=None):
self._was_successful = False
if details is not None:
self._events.append(('addUnexpectedSuccess', test, details))
else:
self._events.append(('addUnexpectedSuccess', test))
def progress(self, offset, whence):
self._events.append(('progress', offset, whence))
def startTestRun(self):
super(ExtendedTestResult, self).startTestRun()
self._was_successful = True
self._tags = TagContext()
def startTest(self, test):
super(ExtendedTestResult, self).startTest(test)
self._tags = TagContext(self._tags)
def stopTest(self, test):
self._tags = self._tags.parent
super(ExtendedTestResult, self).stopTest(test)
@property
def current_tags(self):
return self._tags.get_current_tags()
def tags(self, new_tags, gone_tags):
self._tags.change_tags(new_tags, gone_tags)
self._events.append(('tags', new_tags, gone_tags))
def time(self, time):
self._events.append(('time', time))
def wasSuccessful(self):
return self._was_successful
class TwistedTestResult(LoggingBase):
"""
Emulate the relevant bits of :py:class:`twisted.trial.itrial.IReporter`.
Used to ensure that we can use ``trial`` as a test runner.
"""
def __init__(self, event_log=None):
super(TwistedTestResult, self).__init__(event_log=event_log)
self._was_successful = True
self.testsRun = 0
def startTest(self, test):
self.testsRun += 1
self._events.append(('startTest', test))
def stopTest(self, test):
self._events.append(('stopTest', test))
def addSuccess(self, test):
self._events.append(('addSuccess', test))
def addError(self, test, error):
self._was_successful = False
self._events.append(('addError', test, error))
def addFailure(self, test, error):
self._was_successful = False
self._events.append(('addFailure', test, error))
def addExpectedFailure(self, test, failure, todo=None):
self._events.append(('addExpectedFailure', test, failure))
def addUnexpectedSuccess(self, test, todo=None):
self._events.append(('addUnexpectedSuccess', test))
def addSkip(self, test, reason):
self._events.append(('addSkip', test, reason))
def wasSuccessful(self):
return self._was_successful
def done(self):
pass
class StreamResult(LoggingBase):
"""A StreamResult implementation for testing.
All events are logged to _events.
"""
def startTestRun(self):
self._events.append(('startTestRun',))
def stopTestRun(self):
self._events.append(('stopTestRun',))
def status(self, test_id=None, test_status=None, test_tags=None,
runnable=True, file_name=None, file_bytes=None, eof=False,
mime_type=None, route_code=None, timestamp=None):
self._events.append(
_StatusEvent(
'status', test_id, test_status, test_tags, runnable,
file_name, file_bytes, eof, mime_type, route_code,
timestamp))
# Convenience for easier access to status fields
_StatusEvent = namedtuple(
"_Event", [
"name", "test_id", "test_status", "test_tags", "runnable", "file_name",
"file_bytes", "eof", "mime_type", "route_code", "timestamp"])

File diff suppressed because it is too large.


@@ -1,51 +0,0 @@
# Copyright (c) 2008-2015 testtools developers. See LICENSE for details.
"""Tests for testtools itself."""
from unittest import TestSuite
import testscenarios
def test_suite():
from testtools.tests import (
matchers,
twistedsupport,
test_assert_that,
test_compat,
test_content,
test_content_type,
test_distutilscmd,
test_fixturesupport,
test_helpers,
test_monkey,
test_run,
test_runtest,
test_tags,
test_testcase,
test_testresult,
test_testsuite,
test_with_with,
)
modules = [
matchers,
twistedsupport,
test_assert_that,
test_compat,
test_content,
test_content_type,
test_distutilscmd,
test_fixturesupport,
test_helpers,
test_monkey,
test_run,
test_runtest,
test_tags,
test_testcase,
test_testresult,
test_testsuite,
test_with_with,
]
suites = map(lambda x: x.test_suite(), modules)
all_tests = TestSuite(suites)
return TestSuite(testscenarios.generate_scenarios(all_tests))


@@ -1,167 +0,0 @@
# Copyright (c) 2008-2016 testtools developers. See LICENSE for details.
"""Helpers for tests."""
__all__ = [
'LoggingResult',
]
import sys
from extras import safe_hasattr
from testtools import TestResult
from testtools.content import StackLinesContent
from testtools.matchers import (
AfterPreprocessing,
Equals,
MatchesDict,
MatchesListwise,
)
from testtools import runtest
# Importing to preserve compatibility.
safe_hasattr
# GZ 2010-08-12: Don't do this, pointlessly creates an exc_info cycle
try:
raise Exception
except Exception:
an_exc_info = sys.exc_info()
# Deprecated: This class's attributes are somewhat non-deterministic, which
# leads to hard-to-predict tests (because Python upstream keeps changing things).
class LoggingResult(TestResult):
"""TestResult that logs its event to a list."""
def __init__(self, log):
self._events = log
super(LoggingResult, self).__init__()
def startTest(self, test):
self._events.append(('startTest', test))
super(LoggingResult, self).startTest(test)
def stop(self):
self._events.append('stop')
super(LoggingResult, self).stop()
def stopTest(self, test):
self._events.append(('stopTest', test))
super(LoggingResult, self).stopTest(test)
def addFailure(self, test, error):
self._events.append(('addFailure', test, error))
super(LoggingResult, self).addFailure(test, error)
def addError(self, test, error):
self._events.append(('addError', test, error))
super(LoggingResult, self).addError(test, error)
def addSkip(self, test, reason):
self._events.append(('addSkip', test, reason))
super(LoggingResult, self).addSkip(test, reason)
def addSuccess(self, test):
self._events.append(('addSuccess', test))
super(LoggingResult, self).addSuccess(test)
def startTestRun(self):
self._events.append('startTestRun')
super(LoggingResult, self).startTestRun()
def stopTestRun(self):
self._events.append('stopTestRun')
super(LoggingResult, self).stopTestRun()
def done(self):
self._events.append('done')
super(LoggingResult, self).done()
def tags(self, new_tags, gone_tags):
self._events.append(('tags', new_tags, gone_tags))
super(LoggingResult, self).tags(new_tags, gone_tags)
def time(self, a_datetime):
self._events.append(('time', a_datetime))
super(LoggingResult, self).time(a_datetime)
def is_stack_hidden():
return StackLinesContent.HIDE_INTERNAL_STACK
def hide_testtools_stack(should_hide=True):
result = StackLinesContent.HIDE_INTERNAL_STACK
StackLinesContent.HIDE_INTERNAL_STACK = should_hide
return result
def run_with_stack_hidden(should_hide, f, *args, **kwargs):
old_should_hide = hide_testtools_stack(should_hide)
try:
return f(*args, **kwargs)
finally:
hide_testtools_stack(old_should_hide)
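`run_with_stack_hidden` is the save-set-restore idiom around a module-level flag: the `finally` guarantees the old value comes back even if `f` raises. A standalone sketch of the same pattern (the `HIDE` flag here is a stand-in for `StackLinesContent.HIDE_INTERNAL_STACK`):

```python
HIDE = True  # stand-in for StackLinesContent.HIDE_INTERNAL_STACK

def set_hide(value):
    """Set the flag, returning the previous value so it can be restored."""
    global HIDE
    old, HIDE = HIDE, value
    return old

def run_with_flag(value, f, *args, **kwargs):
    old = set_hide(value)
    try:
        return f(*args, **kwargs)
    finally:
        set_hide(old)

seen = []
# The callable observes the temporarily overridden flag value.
result = run_with_flag(False, lambda: seen.append(HIDE) or 'done')
```

After the call the module-level flag is back to its original value, so nested uses (as in FullStackRunTest) compose safely.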
class FullStackRunTest(runtest.RunTest):
def _run_user(self, fn, *args, **kwargs):
return run_with_stack_hidden(
False,
super(FullStackRunTest, self)._run_user, fn, *args, **kwargs)
class MatchesEvents(object):
"""Match a list of test result events.
Specify events as a data structure. Ordinary Python objects within this
structure will be compared exactly, but you can also use matchers at any
point.
"""
def __init__(self, *expected):
self._expected = expected
def _make_matcher(self, obj):
# This isn't very safe for general use, but is good enough to make
# some tests in this module more readable.
if hasattr(obj, 'match'):
return obj
elif isinstance(obj, tuple) or isinstance(obj, list):
return MatchesListwise(
[self._make_matcher(item) for item in obj])
elif isinstance(obj, dict):
return MatchesDict(dict(
(key, self._make_matcher(value))
for key, value in obj.items()))
else:
return Equals(obj)
def match(self, observed):
matcher = self._make_matcher(self._expected)
return matcher.match(observed)
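`_make_matcher` above recursively turns a nested data structure into a matcher tree: matchers pass through, sequences match elementwise, dicts match per key, and everything else must compare equal. The same idea can be sketched with plain predicates instead of testtools matchers (names here are illustrative):

```python
def make_checker(spec):
    """Build a predicate from a nested spec.

    Callables are used as-is; tuples/lists match elementwise; dicts
    match per key; anything else must compare equal.
    """
    if callable(spec):
        return spec
    if isinstance(spec, (tuple, list)):
        parts = [make_checker(item) for item in spec]
        return lambda obs: (len(obs) == len(parts)
                            and all(p(o) for p, o in zip(parts, obs)))
    if isinstance(spec, dict):
        keyed = dict((k, make_checker(v)) for k, v in spec.items())
        return lambda obs: (set(obs) == set(keyed)
                            and all(keyed[k](obs[k]) for k in keyed))
    return lambda obs: obs == spec

# Exact string for the event name, a predicate for the id field.
check = make_checker(('startTest', {'id': lambda v: v.startswith('test_')}))
```

As in MatchesEvents, this lets a test spell out an expected event log mostly as literal data, dropping down to a custom check only where needed.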
class AsText(AfterPreprocessing):
"""Match the text of a Content instance."""
def __init__(self, matcher, annotate=True):
super(AsText, self).__init__(
lambda log: log.as_text(), matcher, annotate=annotate)
def raise_(exception):
"""Raise ``exception``.
Useful for raising exceptions when it is inconvenient to use a statement
(e.g. in a lambda).
:param Exception exception: An exception to raise.
:raises: Whatever ``exception`` is.
"""
raise exception


@@ -1,33 +0,0 @@
# Copyright (c) 2009-2012 testtools developers. See LICENSE for details.
from unittest import TestSuite
def test_suite():
from testtools.tests.matchers import (
test_basic,
test_const,
test_datastructures,
test_dict,
test_doctest,
test_exception,
test_filesystem,
test_higherorder,
test_impl,
test_warnings
)
modules = [
test_basic,
test_const,
test_datastructures,
test_dict,
test_doctest,
test_exception,
test_filesystem,
test_higherorder,
test_impl,
test_warnings
]
suites = map(lambda x: x.test_suite(), modules)
return TestSuite(suites)


@@ -1,42 +0,0 @@
# Copyright (c) 2008-2012 testtools developers. See LICENSE for details.
from testtools.tests.helpers import FullStackRunTest
class TestMatchersInterface(object):
run_tests_with = FullStackRunTest
def test_matches_match(self):
matcher = self.matches_matcher
matches = self.matches_matches
mismatches = self.matches_mismatches
for candidate in matches:
self.assertEqual(None, matcher.match(candidate))
for candidate in mismatches:
mismatch = matcher.match(candidate)
self.assertNotEqual(None, mismatch)
self.assertNotEqual(None, getattr(mismatch, 'describe', None))
def test__str__(self):
# [(expected, object to __str__)].
from testtools.matchers._doctest import DocTestMatches
examples = self.str_examples
for expected, matcher in examples:
self.assertThat(matcher, DocTestMatches(expected))
def test_describe_difference(self):
# [(expected, matchee, matcher), ...]
examples = self.describe_examples
for difference, matchee, matcher in examples:
mismatch = matcher.match(matchee)
self.assertEqual(difference, mismatch.describe())
def test_mismatch_details(self):
# The mismatch object must provide get_details, which must return a
# dictionary mapping names to Content objects.
examples = self.describe_examples
for difference, matchee, matcher in examples:
mismatch = matcher.match(matchee)
details = mismatch.get_details()
self.assertEqual(dict(details), details)


@@ -1,423 +0,0 @@
# Copyright (c) 2008-2012 testtools developers. See LICENSE for details.
import re
from testtools import TestCase
from testtools.compat import (
text_repr,
_b,
_u,
)
from testtools.matchers._basic import (
_BinaryMismatch,
Contains,
DoesNotEndWith,
DoesNotStartWith,
EndsWith,
Equals,
Is,
IsInstance,
LessThan,
GreaterThan,
HasLength,
MatchesRegex,
NotEquals,
SameMembers,
StartsWith,
)
from testtools.tests.helpers import FullStackRunTest
from testtools.tests.matchers.helpers import TestMatchersInterface
class Test_BinaryMismatch(TestCase):
"""Mismatches from binary comparisons need useful describe output"""
_long_string = "This is a longish multiline non-ascii string\n\xa7"
_long_b = _b(_long_string)
_long_u = _u(_long_string)
class CustomRepr(object):
def __init__(self, repr_string):
self._repr_string = repr_string
def __repr__(self):
return _u('<object ') + _u(self._repr_string) + _u('>')
def test_short_objects(self):
o1, o2 = self.CustomRepr('a'), self.CustomRepr('b')
mismatch = _BinaryMismatch(o1, "!~", o2)
self.assertEqual(mismatch.describe(), "%r !~ %r" % (o1, o2))
def test_short_mixed_strings(self):
b, u = _b("\xa7"), _u("\xa7")
mismatch = _BinaryMismatch(b, "!~", u)
self.assertEqual(mismatch.describe(), "%r !~ %r" % (b, u))
def test_long_bytes(self):
one_line_b = self._long_b.replace(_b("\n"), _b(" "))
mismatch = _BinaryMismatch(one_line_b, "!~", self._long_b)
self.assertEqual(
mismatch.describe(),
"%s:\nreference = %s\nactual = %s\n" % (
"!~",
text_repr(self._long_b, multiline=True),
text_repr(one_line_b),
)
)
def test_long_unicode(self):
one_line_u = self._long_u.replace("\n", " ")
mismatch = _BinaryMismatch(one_line_u, "!~", self._long_u)
self.assertEqual(
mismatch.describe(),
"%s:\nreference = %s\nactual = %s\n" % (
"!~",
text_repr(self._long_u, multiline=True),
text_repr(one_line_u),
)
)
def test_long_mixed_strings(self):
mismatch = _BinaryMismatch(self._long_b, "!~", self._long_u)
self.assertEqual(
mismatch.describe(),
"%s:\nreference = %s\nactual = %s\n" % (
"!~",
text_repr(self._long_u, multiline=True),
text_repr(self._long_b, multiline=True),
)
)
def test_long_bytes_and_object(self):
obj = object()
mismatch = _BinaryMismatch(self._long_b, "!~", obj)
self.assertEqual(
mismatch.describe(),
"%s:\nreference = %s\nactual = %s\n" % (
"!~",
repr(obj),
text_repr(self._long_b, multiline=True),
)
)
def test_long_unicode_and_object(self):
obj = object()
mismatch = _BinaryMismatch(self._long_u, "!~", obj)
self.assertEqual(
mismatch.describe(),
"%s:\nreference = %s\nactual = %s\n" % (
"!~",
repr(obj),
text_repr(self._long_u, multiline=True),
)
)
class TestEqualsInterface(TestCase, TestMatchersInterface):
matches_matcher = Equals(1)
matches_matches = [1]
matches_mismatches = [2]
str_examples = [("Equals(1)", Equals(1)), ("Equals('1')", Equals('1'))]
describe_examples = [
("2 != 1", 2, Equals(1)),
(("!=:\n"
"reference = 'abcdefghijklmnopqrstuvwxyz0123456789'\n"
"actual = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'\n"),
'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789',
Equals('abcdefghijklmnopqrstuvwxyz0123456789')),
]
class TestNotEqualsInterface(TestCase, TestMatchersInterface):
matches_matcher = NotEquals(1)
matches_matches = [2]
matches_mismatches = [1]
str_examples = [
("NotEquals(1)", NotEquals(1)), ("NotEquals('1')", NotEquals('1'))]
describe_examples = [("1 == 1", 1, NotEquals(1))]
class TestIsInterface(TestCase, TestMatchersInterface):
foo = object()
bar = object()
matches_matcher = Is(foo)
matches_matches = [foo]
matches_mismatches = [bar, 1]
str_examples = [("Is(2)", Is(2))]
describe_examples = [("2 is not 1", 2, Is(1))]
class TestIsInstanceInterface(TestCase, TestMatchersInterface):
class Foo:pass
matches_matcher = IsInstance(Foo)
matches_matches = [Foo()]
matches_mismatches = [object(), 1, Foo]
str_examples = [
("IsInstance(str)", IsInstance(str)),
("IsInstance(str, int)", IsInstance(str, int)),
]
describe_examples = [
("'foo' is not an instance of int", 'foo', IsInstance(int)),
("'foo' is not an instance of any of (int, type)", 'foo',
IsInstance(int, type)),
]
class TestLessThanInterface(TestCase, TestMatchersInterface):
matches_matcher = LessThan(4)
matches_matches = [-5, 3]
matches_mismatches = [4, 5, 5000]
str_examples = [
("LessThan(12)", LessThan(12)),
]
describe_examples = [
('5 >= 4', 5, LessThan(4)),
('4 >= 4', 4, LessThan(4)),
]
class TestGreaterThanInterface(TestCase, TestMatchersInterface):
matches_matcher = GreaterThan(4)
matches_matches = [5, 8]
matches_mismatches = [-2, 0, 4]
str_examples = [
("GreaterThan(12)", GreaterThan(12)),
]
describe_examples = [
('4 <= 5', 4, GreaterThan(5)),
('4 <= 4', 4, GreaterThan(4)),
]
class TestContainsInterface(TestCase, TestMatchersInterface):
matches_matcher = Contains('foo')
matches_matches = ['foo', 'afoo', 'fooa']
matches_mismatches = ['f', 'fo', 'oo', 'faoo', 'foao']
str_examples = [
("Contains(1)", Contains(1)),
("Contains('foo')", Contains('foo')),
]
describe_examples = [("1 not in 2", 2, Contains(1))]
class DoesNotStartWithTests(TestCase):
run_tests_with = FullStackRunTest
def test_describe(self):
mismatch = DoesNotStartWith("fo", "bo")
self.assertEqual("'fo' does not start with 'bo'.", mismatch.describe())
def test_describe_non_ascii_unicode(self):
string = _u("A\xA7")
suffix = _u("B\xA7")
mismatch = DoesNotStartWith(string, suffix)
self.assertEqual("%s does not start with %s." % (
text_repr(string), text_repr(suffix)),
mismatch.describe())
def test_describe_non_ascii_bytes(self):
string = _b("A\xA7")
suffix = _b("B\xA7")
mismatch = DoesNotStartWith(string, suffix)
self.assertEqual("%r does not start with %r." % (string, suffix),
mismatch.describe())
class StartsWithTests(TestCase):
run_tests_with = FullStackRunTest
def test_str(self):
matcher = StartsWith("bar")
self.assertEqual("StartsWith('bar')", str(matcher))
def test_str_with_bytes(self):
b = _b("\xA7")
matcher = StartsWith(b)
self.assertEqual("StartsWith(%r)" % (b,), str(matcher))
def test_str_with_unicode(self):
u = _u("\xA7")
matcher = StartsWith(u)
self.assertEqual("StartsWith(%r)" % (u,), str(matcher))
def test_match(self):
matcher = StartsWith("bar")
self.assertIs(None, matcher.match("barf"))
def test_mismatch_returns_does_not_start_with(self):
matcher = StartsWith("bar")
self.assertIsInstance(matcher.match("foo"), DoesNotStartWith)
def test_mismatch_sets_matchee(self):
matcher = StartsWith("bar")
mismatch = matcher.match("foo")
self.assertEqual("foo", mismatch.matchee)
def test_mismatch_sets_expected(self):
matcher = StartsWith("bar")
mismatch = matcher.match("foo")
self.assertEqual("bar", mismatch.expected)
class DoesNotEndWithTests(TestCase):
run_tests_with = FullStackRunTest
def test_describe(self):
mismatch = DoesNotEndWith("fo", "bo")
self.assertEqual("'fo' does not end with 'bo'.", mismatch.describe())
def test_describe_non_ascii_unicode(self):
string = _u("A\xA7")
suffix = _u("B\xA7")
mismatch = DoesNotEndWith(string, suffix)
self.assertEqual("%s does not end with %s." % (
text_repr(string), text_repr(suffix)),
mismatch.describe())
def test_describe_non_ascii_bytes(self):
string = _b("A\xA7")
suffix = _b("B\xA7")
mismatch = DoesNotEndWith(string, suffix)
self.assertEqual("%r does not end with %r." % (string, suffix),
mismatch.describe())
class EndsWithTests(TestCase):
run_tests_with = FullStackRunTest
def test_str(self):
matcher = EndsWith("bar")
self.assertEqual("EndsWith('bar')", str(matcher))
def test_str_with_bytes(self):
b = _b("\xA7")
matcher = EndsWith(b)
self.assertEqual("EndsWith(%r)" % (b,), str(matcher))
def test_str_with_unicode(self):
u = _u("\xA7")
matcher = EndsWith(u)
self.assertEqual("EndsWith(%r)" % (u,), str(matcher))
def test_match(self):
matcher = EndsWith("arf")
self.assertIs(None, matcher.match("barf"))
def test_mismatch_returns_does_not_end_with(self):
matcher = EndsWith("bar")
self.assertIsInstance(matcher.match("foo"), DoesNotEndWith)
def test_mismatch_sets_matchee(self):
matcher = EndsWith("bar")
mismatch = matcher.match("foo")
self.assertEqual("foo", mismatch.matchee)
def test_mismatch_sets_expected(self):
matcher = EndsWith("bar")
mismatch = matcher.match("foo")
self.assertEqual("bar", mismatch.expected)
class TestSameMembers(TestCase, TestMatchersInterface):
matches_matcher = SameMembers([1, 1, 2, 3, {'foo': 'bar'}])
matches_matches = [
[1, 1, 2, 3, {'foo': 'bar'}],
[3, {'foo': 'bar'}, 1, 2, 1],
[3, 2, 1, {'foo': 'bar'}, 1],
(2, {'foo': 'bar'}, 3, 1, 1),
]
matches_mismatches = [
set([1, 2, 3]),
[1, 1, 2, 3, 5],
[1, 2, 3, {'foo': 'bar'}],
'foo',
]
describe_examples = [
(("elements differ:\n"
"reference = ['apple', 'orange', 'canteloupe', 'watermelon', 'lemon', 'banana']\n"
"actual = ['orange', 'apple', 'banana', 'sparrow', 'lemon', 'canteloupe']\n"
": \n"
"missing: ['watermelon']\n"
"extra: ['sparrow']"
),
['orange', 'apple', 'banana', 'sparrow', 'lemon', 'canteloupe',],
SameMembers(
['apple', 'orange', 'canteloupe', 'watermelon',
'lemon', 'banana',])),
]
str_examples = [
('SameMembers([1, 2, 3])', SameMembers([1, 2, 3])),
]
class TestMatchesRegex(TestCase, TestMatchersInterface):
matches_matcher = MatchesRegex('a|b')
matches_matches = ['a', 'b']
matches_mismatches = ['c']
str_examples = [
("MatchesRegex('a|b')", MatchesRegex('a|b')),
("MatchesRegex('a|b', re.M)", MatchesRegex('a|b', re.M)),
("MatchesRegex('a|b', re.I|re.M)", MatchesRegex('a|b', re.I|re.M)),
("MatchesRegex(%r)" % (_b("\xA7"),), MatchesRegex(_b("\xA7"))),
("MatchesRegex(%r)" % (_u("\xA7"),), MatchesRegex(_u("\xA7"))),
]
describe_examples = [
("'c' does not match /a|b/", 'c', MatchesRegex('a|b')),
("'c' does not match /a\\d/", 'c', MatchesRegex(r'a\d')),
("%r does not match /\\s+\\xa7/" % (_b('c'),),
_b('c'), MatchesRegex(_b("\\s+\xA7"))),
("%r does not match /\\s+\\xa7/" % (_u('c'),),
_u('c'), MatchesRegex(_u("\\s+\xA7"))),
]
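The describe_examples above fix the mismatch wording as `%r does not match /pattern/`. A stdlib sketch of a matcher with that behaviour (None on success, a description string on failure), using only `re`:

```python
import re

def matches_regex(pattern, value, flags=0):
    """Return None if value matches pattern at its start, else a description."""
    if re.match(pattern, value, flags):
        return None
    return "%r does not match /%s/" % (value, pattern)
```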
class TestHasLength(TestCase, TestMatchersInterface):
matches_matcher = HasLength(2)
matches_matches = [[1, 2]]
matches_mismatches = [[], [1], [3, 2, 1]]
str_examples = [
("HasLength(2)", HasLength(2)),
]
describe_examples = [
("len([]) != 1", [], HasLength(1)),
]
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)


@@ -1,31 +0,0 @@
# Copyright (c) 2016 testtools developers. See LICENSE for details.
from testtools import TestCase
from testtools.compat import _u
from testtools.matchers import Always, Never
from testtools.tests.matchers.helpers import TestMatchersInterface
class TestAlwaysInterface(TestMatchersInterface, TestCase):
""":py:func:`~testtools.matchers.Always` always matches."""
matches_matcher = Always()
matches_matches = [42, object(), 'hi mom']
matches_mismatches = []
str_examples = [('Always()', Always())]
describe_examples = []
class TestNeverInterface(TestMatchersInterface, TestCase):
""":py:func:`~testtools.matchers.Never` never matches."""
matches_matcher = Never()
matches_matches = []
matches_mismatches = [42, object(), 'hi mom']
str_examples = [('Never()', Never())]
describe_examples = [(_u('Inevitable mismatch on 42'), 42, Never())]
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)


@@ -1,211 +0,0 @@
# Copyright (c) 2008-2012 testtools developers. See LICENSE for details.
import doctest
import re
import sys
from testtools import TestCase
from testtools.compat import StringIO
from testtools.matchers import (
Annotate,
Equals,
LessThan,
MatchesRegex,
NotEquals,
)
from testtools.matchers._datastructures import (
ContainsAll,
MatchesListwise,
MatchesStructure,
MatchesSetwise,
)
from testtools.tests.helpers import FullStackRunTest
from testtools.tests.matchers.helpers import TestMatchersInterface
def run_doctest(obj, name):
p = doctest.DocTestParser()
t = p.get_doctest(
obj.__doc__, sys.modules[obj.__module__].__dict__, name, '', 0)
r = doctest.DocTestRunner()
output = StringIO()
r.run(t, out=output.write)
return r.failures, output.getvalue()
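The run_doctest helper above executes an object's docstring as a doctest and captures the result. The same pattern in isolation, stdlib only, with a hypothetical `sample` function standing in for a matcher class:

```python
import doctest
from io import StringIO

def check_docstring(obj, name):
    """Run obj's docstring as a doctest; return (failures, captured output)."""
    test = doctest.DocTestParser().get_doctest(obj.__doc__, {}, name, '', 0)
    runner = doctest.DocTestRunner()
    out = StringIO()
    runner.run(test, out=out.write)
    return runner.failures, out.getvalue()

def sample():
    """
    >>> 2 + 2
    4
    """
```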
class TestMatchesListwise(TestCase):
run_tests_with = FullStackRunTest
# XXX: Add interface tests.
def test_docstring(self):
failure_count, output = run_doctest(
MatchesListwise, "MatchesListwise")
if failure_count:
self.fail("Doctest failed with %s" % output)
class TestMatchesStructure(TestCase, TestMatchersInterface):
class SimpleClass:
def __init__(self, x, y):
self.x = x
self.y = y
matches_matcher = MatchesStructure(x=Equals(1), y=Equals(2))
matches_matches = [SimpleClass(1, 2)]
matches_mismatches = [
SimpleClass(2, 2),
SimpleClass(1, 1),
SimpleClass(3, 3),
]
str_examples = [
("MatchesStructure(x=Equals(1))", MatchesStructure(x=Equals(1))),
("MatchesStructure(y=Equals(2))", MatchesStructure(y=Equals(2))),
("MatchesStructure(x=Equals(1), y=Equals(2))",
MatchesStructure(x=Equals(1), y=Equals(2))),
]
describe_examples = [
("""\
Differences: [
1 != 3: x
]""", SimpleClass(1, 2), MatchesStructure(x=Equals(3), y=Equals(2))),
("""\
Differences: [
2 != 3: y
]""", SimpleClass(1, 2), MatchesStructure(x=Equals(1), y=Equals(3))),
("""\
Differences: [
1 != 0: x
2 != 0: y
]""", SimpleClass(1, 2), MatchesStructure(x=Equals(0), y=Equals(0))),
]
def test_fromExample(self):
self.assertThat(
self.SimpleClass(1, 2),
MatchesStructure.fromExample(self.SimpleClass(1, 3), 'x'))
def test_byEquality(self):
self.assertThat(
self.SimpleClass(1, 2),
MatchesStructure.byEquality(x=1))
def test_withStructure(self):
self.assertThat(
self.SimpleClass(1, 2),
MatchesStructure.byMatcher(LessThan, x=2))
def test_update(self):
self.assertThat(
self.SimpleClass(1, 2),
MatchesStructure(x=NotEquals(1)).update(x=Equals(1)))
def test_update_none(self):
self.assertThat(
self.SimpleClass(1, 2),
MatchesStructure(x=Equals(1), z=NotEquals(42)).update(
z=None))
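MatchesStructure, exercised above, compares named attributes of an object against per-attribute matchers and reports each difference (`1 != 3: x` style). A simplified sketch using plain equality instead of matcher objects; `Point` is a stand-in for SimpleClass:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

def matches_structure(obj, **expected):
    """List per-attribute differences, e.g. '1 != 3: x'; empty means match."""
    return [
        "%r != %r: %s" % (getattr(obj, name), value, name)
        for name, value in sorted(expected.items())
        if getattr(obj, name) != value
    ]
```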
class TestMatchesSetwise(TestCase):
run_tests_with = FullStackRunTest
def assertMismatchWithDescriptionMatching(self, value, matcher,
description_matcher):
mismatch = matcher.match(value)
if mismatch is None:
self.fail("%s matched %s" % (matcher, value))
actual_description = mismatch.describe()
self.assertThat(
actual_description,
Annotate(
"%s matching %s" % (matcher, value),
description_matcher))
def test_matches(self):
self.assertIs(
None, MatchesSetwise(Equals(1), Equals(2)).match([2, 1]))
def test_mismatches(self):
self.assertMismatchWithDescriptionMatching(
[2, 3], MatchesSetwise(Equals(1), Equals(2)),
MatchesRegex('.*There was 1 mismatch$', re.S))
def test_too_many_matchers(self):
self.assertMismatchWithDescriptionMatching(
[2, 3], MatchesSetwise(Equals(1), Equals(2), Equals(3)),
Equals('There was 1 matcher left over: Equals(1)'))
def test_too_many_values(self):
self.assertMismatchWithDescriptionMatching(
[1, 2, 3], MatchesSetwise(Equals(1), Equals(2)),
Equals('There was 1 value left over: [3]'))
def test_two_too_many_matchers(self):
self.assertMismatchWithDescriptionMatching(
[3], MatchesSetwise(Equals(1), Equals(2), Equals(3)),
MatchesRegex(
r'There were 2 matchers left over: Equals\([12]\), '
r'Equals\([12]\)'))
def test_two_too_many_values(self):
self.assertMismatchWithDescriptionMatching(
[1, 2, 3, 4], MatchesSetwise(Equals(1), Equals(2)),
MatchesRegex(
r'There were 2 values left over: \[[34], [34]\]'))
def test_mismatch_and_too_many_matchers(self):
self.assertMismatchWithDescriptionMatching(
[2, 3], MatchesSetwise(Equals(0), Equals(1), Equals(2)),
MatchesRegex(
r'.*There was 1 mismatch and 1 extra matcher: Equals\([01]\)',
re.S))
def test_mismatch_and_too_many_values(self):
self.assertMismatchWithDescriptionMatching(
[2, 3, 4], MatchesSetwise(Equals(1), Equals(2)),
MatchesRegex(
r'.*There was 1 mismatch and 1 extra value: \[[34]\]',
re.S))
def test_mismatch_and_two_too_many_matchers(self):
self.assertMismatchWithDescriptionMatching(
[3, 4], MatchesSetwise(
Equals(0), Equals(1), Equals(2), Equals(3)),
MatchesRegex(
'.*There was 1 mismatch and 2 extra matchers: '
r'Equals\([012]\), Equals\([012]\)', re.S))
def test_mismatch_and_two_too_many_values(self):
self.assertMismatchWithDescriptionMatching(
[2, 3, 4, 5], MatchesSetwise(Equals(1), Equals(2)),
MatchesRegex(
r'.*There was 1 mismatch and 2 extra values: \[[145], [145]\]',
re.S))
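The tests above describe MatchesSetwise's bookkeeping: each matcher must consume a distinct value, and leftover matchers or leftover values are reported separately. A greedy stdlib sketch with plain predicates standing in for matcher objects (not necessarily testtools' exact pairing strategy):

```python
def matches_setwise(values, predicates):
    """Greedily pair predicates with values; return the leftovers of each."""
    remaining = list(values)
    unmatched = []
    for predicate in predicates:
        for i, value in enumerate(remaining):
            if predicate(value):
                del remaining[i]  # each value satisfies at most one predicate
                break
        else:
            unmatched.append(predicate)
    return unmatched, remaining
```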
class TestContainsAllInterface(TestCase, TestMatchersInterface):
matches_matcher = ContainsAll(['foo', 'bar'])
matches_matches = [['foo', 'bar'], ['foo', 'z', 'bar'], ['bar', 'foo']]
matches_mismatches = [['f', 'g'], ['foo', 'baz'], []]
str_examples = [(
"MatchesAll(Contains('foo'), Contains('bar'))",
ContainsAll(['foo', 'bar'])),
]
describe_examples = [("""Differences: [
'baz' not in 'foo'
]""",
'foo', ContainsAll(['foo', 'baz']))]
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)

View File

@@ -1,251 +0,0 @@
from testtools import TestCase
from testtools.matchers import (
Equals,
NotEquals,
Not,
)
from testtools.matchers._dict import (
ContainedByDict,
ContainsDict,
KeysEqual,
MatchesAllDict,
MatchesDict,
_SubDictOf,
)
from testtools.tests.matchers.helpers import TestMatchersInterface
class TestMatchesAllDictInterface(TestCase, TestMatchersInterface):
matches_matcher = MatchesAllDict({'a': NotEquals(1), 'b': NotEquals(2)})
matches_matches = [3, 4]
matches_mismatches = [1, 2]
str_examples = [
("MatchesAllDict({'a': NotEquals(1), 'b': NotEquals(2)})",
matches_matcher)]
describe_examples = [
("""a: 1 == 1""", 1, matches_matcher),
]
class TestKeysEqualEmpty(TestCase, TestMatchersInterface):
matches_matcher = KeysEqual()
matches_matches = [
{},
]
matches_mismatches = [
{'foo': 0, 'bar': 1},
{'foo': 0},
{'bar': 1},
{'foo': 0, 'bar': 1, 'baz': 2},
{'a': None, 'b': None, 'c': None},
]
str_examples = [
("KeysEqual()", KeysEqual()),
]
describe_examples = [
("[] does not match {1: 2}: Keys not equal",
{1: 2}, matches_matcher),
]
class TestKeysEqualWithList(TestCase, TestMatchersInterface):
matches_matcher = KeysEqual('foo', 'bar')
matches_matches = [
{'foo': 0, 'bar': 1},
]
matches_mismatches = [
{},
{'foo': 0},
{'bar': 1},
{'foo': 0, 'bar': 1, 'baz': 2},
{'a': None, 'b': None, 'c': None},
]
str_examples = [
("KeysEqual('foo', 'bar')", KeysEqual('foo', 'bar')),
]
describe_examples = []
def test_description(self):
matchee = {'foo': 0, 'bar': 1, 'baz': 2}
mismatch = KeysEqual('foo', 'bar').match(matchee)
description = mismatch.describe()
self.assertThat(
description, Equals(
"['bar', 'foo'] does not match %r: Keys not equal"
% (matchee,)))
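KeysEqual, as test_description above shows, compares only the key sets and reports the expected keys sorted. A minimal sketch under that reading (helper name is ours):

```python
def keys_equal(observed, *expected_keys):
    """Return None if observed has exactly expected_keys, else a description."""
    if sorted(observed) == sorted(expected_keys):
        return None
    return "%r does not match %r: Keys not equal" % (
        sorted(expected_keys), observed)
```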
class TestKeysEqualWithDict(TestKeysEqualWithList):
matches_matcher = KeysEqual({'foo': 3, 'bar': 4})
class TestSubDictOf(TestCase, TestMatchersInterface):
matches_matcher = _SubDictOf({'foo': 'bar', 'baz': 'qux'})
matches_matches = [
{'foo': 'bar', 'baz': 'qux'},
{'foo': 'bar'},
]
matches_mismatches = [
{'foo': 'bar', 'baz': 'qux', 'cat': 'dog'},
{'foo': 'bar', 'cat': 'dog'},
]
str_examples = []
describe_examples = []
class TestMatchesDict(TestCase, TestMatchersInterface):
matches_matcher = MatchesDict(
{'foo': Equals('bar'), 'baz': Not(Equals('qux'))})
matches_matches = [
{'foo': 'bar', 'baz': None},
{'foo': 'bar', 'baz': 'quux'},
]
matches_mismatches = [
{},
{'foo': 'bar', 'baz': 'qux'},
{'foo': 'bop', 'baz': 'qux'},
{'foo': 'bar', 'baz': 'quux', 'cat': 'dog'},
{'foo': 'bar', 'cat': 'dog'},
]
str_examples = [
("MatchesDict({'baz': %s, 'foo': %s})" % (
Not(Equals('qux')), Equals('bar')),
matches_matcher),
]
describe_examples = [
("Missing: {\n"
" 'baz': Not(Equals('qux')),\n"
" 'foo': Equals('bar'),\n"
"}",
{}, matches_matcher),
("Differences: {\n"
" 'baz': 'qux' matches Equals('qux'),\n"
"}",
{'foo': 'bar', 'baz': 'qux'}, matches_matcher),
("Differences: {\n"
" 'baz': 'qux' matches Equals('qux'),\n"
" 'foo': 'bop' != 'bar',\n"
"}",
{'foo': 'bop', 'baz': 'qux'}, matches_matcher),
("Extra: {\n"
" 'cat': 'dog',\n"
"}",
{'foo': 'bar', 'baz': 'quux', 'cat': 'dog'}, matches_matcher),
("Extra: {\n"
" 'cat': 'dog',\n"
"}\n"
"Missing: {\n"
" 'baz': Not(Equals('qux')),\n"
"}",
{'foo': 'bar', 'cat': 'dog'}, matches_matcher),
]
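MatchesDict's descriptions above separate three failure kinds: Missing, Extra and Differences. A sketch of that classification with plain predicates standing in for the per-key matchers:

```python
def classify_dict(actual, expected):
    """Split a dict comparison into (missing, extra, differing) key lists."""
    missing = sorted(k for k in expected if k not in actual)
    extra = sorted(k for k in actual if k not in expected)
    differing = sorted(
        k for k in expected if k in actual and not expected[k](actual[k]))
    return missing, extra, differing
```

ContainsDict and ContainedByDict below reuse the same classification but ignore extra and missing keys respectively.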
class TestContainsDict(TestCase, TestMatchersInterface):
matches_matcher = ContainsDict(
{'foo': Equals('bar'), 'baz': Not(Equals('qux'))})
matches_matches = [
{'foo': 'bar', 'baz': None},
{'foo': 'bar', 'baz': 'quux'},
{'foo': 'bar', 'baz': 'quux', 'cat': 'dog'},
]
matches_mismatches = [
{},
{'foo': 'bar', 'baz': 'qux'},
{'foo': 'bop', 'baz': 'qux'},
{'foo': 'bar', 'cat': 'dog'},
{'foo': 'bar'},
]
str_examples = [
("ContainsDict({'baz': %s, 'foo': %s})" % (
Not(Equals('qux')), Equals('bar')),
matches_matcher),
]
describe_examples = [
("Missing: {\n"
" 'baz': Not(Equals('qux')),\n"
" 'foo': Equals('bar'),\n"
"}",
{}, matches_matcher),
("Differences: {\n"
" 'baz': 'qux' matches Equals('qux'),\n"
"}",
{'foo': 'bar', 'baz': 'qux'}, matches_matcher),
("Differences: {\n"
" 'baz': 'qux' matches Equals('qux'),\n"
" 'foo': 'bop' != 'bar',\n"
"}",
{'foo': 'bop', 'baz': 'qux'}, matches_matcher),
("Missing: {\n"
" 'baz': Not(Equals('qux')),\n"
"}",
{'foo': 'bar', 'cat': 'dog'}, matches_matcher),
]
class TestContainedByDict(TestCase, TestMatchersInterface):
matches_matcher = ContainedByDict(
{'foo': Equals('bar'), 'baz': Not(Equals('qux'))})
matches_matches = [
{},
{'foo': 'bar'},
{'foo': 'bar', 'baz': 'quux'},
{'baz': 'quux'},
]
matches_mismatches = [
{'foo': 'bar', 'baz': 'quux', 'cat': 'dog'},
{'foo': 'bar', 'baz': 'qux'},
{'foo': 'bop', 'baz': 'qux'},
{'foo': 'bar', 'cat': 'dog'},
]
str_examples = [
("ContainedByDict({'baz': %s, 'foo': %s})" % (
Not(Equals('qux')), Equals('bar')),
matches_matcher),
]
describe_examples = [
("Differences: {\n"
" 'baz': 'qux' matches Equals('qux'),\n"
"}",
{'foo': 'bar', 'baz': 'qux'}, matches_matcher),
("Differences: {\n"
" 'baz': 'qux' matches Equals('qux'),\n"
" 'foo': 'bop' != 'bar',\n"
"}",
{'foo': 'bop', 'baz': 'qux'}, matches_matcher),
("Extra: {\n"
" 'cat': 'dog',\n"
"}",
{'foo': 'bar', 'cat': 'dog'}, matches_matcher),
]
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)


@@ -1,82 +0,0 @@
# Copyright (c) 2008-2012 testtools developers. See LICENSE for details.
import doctest
from testtools import TestCase
from testtools.compat import (
str_is_unicode,
_b,
_u,
)
from testtools.matchers._doctest import DocTestMatches
from testtools.tests.helpers import FullStackRunTest
from testtools.tests.matchers.helpers import TestMatchersInterface
class TestDocTestMatchesInterface(TestCase, TestMatchersInterface):
matches_matcher = DocTestMatches("Ran 1 test in ...s", doctest.ELLIPSIS)
matches_matches = ["Ran 1 test in 0.000s", "Ran 1 test in 1.234s"]
matches_mismatches = ["Ran 1 tests in 0.000s", "Ran 2 test in 0.000s"]
str_examples = [("DocTestMatches('Ran 1 test in ...s\\n')",
DocTestMatches("Ran 1 test in ...s")),
("DocTestMatches('foo\\n', flags=8)", DocTestMatches("foo", flags=8)),
]
describe_examples = [('Expected:\n Ran 1 tests in ...s\nGot:\n'
' Ran 1 test in 0.123s\n', "Ran 1 test in 0.123s",
DocTestMatches("Ran 1 tests in ...s", doctest.ELLIPSIS))]
class TestDocTestMatchesInterfaceUnicode(TestCase, TestMatchersInterface):
matches_matcher = DocTestMatches(_u("\xa7..."), doctest.ELLIPSIS)
matches_matches = [_u("\xa7"), _u("\xa7 more\n")]
matches_mismatches = ["\\xa7", _u("more \xa7"), _u("\n\xa7")]
str_examples = [("DocTestMatches(%r)" % (_u("\xa7\n"),),
DocTestMatches(_u("\xa7"))),
]
describe_examples = [(
_u("Expected:\n \xa7\nGot:\n a\n"),
"a",
DocTestMatches(_u("\xa7"), doctest.ELLIPSIS))]
class TestDocTestMatchesSpecific(TestCase):
run_tests_with = FullStackRunTest
def test___init__simple(self):
matcher = DocTestMatches("foo")
self.assertEqual("foo\n", matcher.want)
def test___init__flags(self):
matcher = DocTestMatches("bar\n", doctest.ELLIPSIS)
self.assertEqual("bar\n", matcher.want)
self.assertEqual(doctest.ELLIPSIS, matcher.flags)
def test_describe_non_ascii_bytes(self):
"""Even with bytestrings, the mismatch should be coercible to unicode
DocTestMatches is intended for text, but the Python 2 str type also
permits arbitrary binary inputs. This is a slightly bogus thing to do,
and under Python 3 using bytes objects will reasonably raise an error.
"""
header = _b("\x89PNG\r\n\x1a\n...")
if str_is_unicode:
self.assertRaises(TypeError,
DocTestMatches, header, doctest.ELLIPSIS)
return
matcher = DocTestMatches(header, doctest.ELLIPSIS)
mismatch = matcher.match(_b("GIF89a\1\0\1\0\0\0\0;"))
# Must be treatable as unicode text, the exact output matters less
self.assertTrue(unicode(mismatch.describe()))
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)


@@ -1,187 +0,0 @@
# Copyright (c) 2008-2012 testtools developers. See LICENSE for details.
import sys
from testtools import TestCase
from testtools.matchers import (
AfterPreprocessing,
Equals,
)
from testtools.matchers._exception import (
MatchesException,
Raises,
raises,
)
from testtools.tests.helpers import FullStackRunTest
from testtools.tests.matchers.helpers import TestMatchersInterface
def make_error(type, *args, **kwargs):
try:
raise type(*args, **kwargs)
except type:
return sys.exc_info()
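make_error above manufactures a real exc_info triple by raising and immediately catching the exception; that triple is what MatchesException inspects. The same trick in isolation:

```python
import sys

def capture_exc_info(exc_type, *args):
    """Raise exc_type(*args), catch it, and return sys.exc_info()."""
    try:
        raise exc_type(*args)
    except exc_type:
        return sys.exc_info()
```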
class TestMatchesExceptionInstanceInterface(TestCase, TestMatchersInterface):
matches_matcher = MatchesException(ValueError("foo"))
error_foo = make_error(ValueError, 'foo')
error_bar = make_error(ValueError, 'bar')
error_base_foo = make_error(Exception, 'foo')
matches_matches = [error_foo]
matches_mismatches = [error_bar, error_base_foo]
str_examples = [
("MatchesException(Exception('foo',))",
MatchesException(Exception('foo')))
]
describe_examples = [
("%r is not a %r" % (Exception, ValueError),
error_base_foo,
MatchesException(ValueError("foo"))),
("ValueError('bar',) has different arguments to ValueError('foo',).",
error_bar,
MatchesException(ValueError("foo"))),
]
class TestMatchesExceptionTypeInterface(TestCase, TestMatchersInterface):
matches_matcher = MatchesException(ValueError)
error_foo = make_error(ValueError, 'foo')
error_sub = make_error(UnicodeError, 'bar')
error_base_foo = make_error(Exception, 'foo')
matches_matches = [error_foo, error_sub]
matches_mismatches = [error_base_foo]
str_examples = [
("MatchesException(%r)" % Exception,
MatchesException(Exception))
]
describe_examples = [
("%r is not a %r" % (Exception, ValueError),
error_base_foo,
MatchesException(ValueError)),
]
class TestMatchesExceptionTypeReInterface(TestCase, TestMatchersInterface):
matches_matcher = MatchesException(ValueError, 'fo.')
error_foo = make_error(ValueError, 'foo')
error_sub = make_error(UnicodeError, 'foo')
error_bar = make_error(ValueError, 'bar')
matches_matches = [error_foo, error_sub]
matches_mismatches = [error_bar]
str_examples = [
("MatchesException(%r)" % Exception,
MatchesException(Exception, 'fo.'))
]
describe_examples = [
("'bar' does not match /fo./",
error_bar, MatchesException(ValueError, "fo.")),
]
class TestMatchesExceptionTypeMatcherInterface(TestCase, TestMatchersInterface):
matches_matcher = MatchesException(
ValueError, AfterPreprocessing(str, Equals('foo')))
error_foo = make_error(ValueError, 'foo')
error_sub = make_error(UnicodeError, 'foo')
error_bar = make_error(ValueError, 'bar')
matches_matches = [error_foo, error_sub]
matches_mismatches = [error_bar]
str_examples = [
("MatchesException(%r)" % Exception,
MatchesException(Exception, Equals('foo')))
]
describe_examples = [
("%r != 5" % (error_bar[1],),
error_bar, MatchesException(ValueError, Equals(5))),
]
class TestRaisesInterface(TestCase, TestMatchersInterface):
matches_matcher = Raises()
def boom():
raise Exception('foo')
matches_matches = [boom]
matches_mismatches = [lambda:None]
    # Tricky to get function objects to render consistently, and the interfaces
    # helper uses assertEqual rather than (for instance) DocTestMatches.
str_examples = []
describe_examples = []
class TestRaisesExceptionMatcherInterface(TestCase, TestMatchersInterface):
matches_matcher = Raises(
exception_matcher=MatchesException(Exception('foo')))
def boom_bar():
raise Exception('bar')
def boom_foo():
raise Exception('foo')
matches_matches = [boom_foo]
matches_mismatches = [lambda:None, boom_bar]
    # Tricky to get function objects to render consistently, and the interfaces
    # helper uses assertEqual rather than (for instance) DocTestMatches.
str_examples = []
describe_examples = []
class TestRaisesBaseTypes(TestCase):
run_tests_with = FullStackRunTest
def raiser(self):
raise KeyboardInterrupt('foo')
def test_KeyboardInterrupt_matched(self):
# When KeyboardInterrupt is matched, it is swallowed.
matcher = Raises(MatchesException(KeyboardInterrupt))
self.assertThat(self.raiser, matcher)
def test_KeyboardInterrupt_propagates(self):
# The default 'it raised' propagates KeyboardInterrupt.
match_keyb = Raises(MatchesException(KeyboardInterrupt))
def raise_keyb_from_match():
matcher = Raises()
matcher.match(self.raiser)
self.assertThat(raise_keyb_from_match, match_keyb)
def test_KeyboardInterrupt_match_Exception_propagates(self):
# If the raised exception isn't matched, and it is not a subclass of
# Exception, it is propagated.
match_keyb = Raises(MatchesException(KeyboardInterrupt))
def raise_keyb_from_match():
matcher = Raises(MatchesException(Exception))
matcher.match(self.raiser)
self.assertThat(raise_keyb_from_match, match_keyb)
class TestRaisesConvenience(TestCase):
run_tests_with = FullStackRunTest
def test_exc_type(self):
self.assertThat(lambda: 1/0, raises(ZeroDivisionError))
def test_exc_value(self):
e = RuntimeError("You lose!")
def raiser():
raise e
self.assertThat(raiser, raises(e))
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)


@@ -1,243 +0,0 @@
# Copyright (c) 2008-2012 testtools developers. See LICENSE for details.
import os
import shutil
import tarfile
import tempfile
from testtools import TestCase
from testtools.matchers import (
Contains,
DocTestMatches,
Equals,
)
from testtools.matchers._filesystem import (
DirContains,
DirExists,
FileContains,
FileExists,
HasPermissions,
PathExists,
SamePath,
TarballContains,
)
class PathHelpers(object):
def mkdtemp(self):
directory = tempfile.mkdtemp()
self.addCleanup(shutil.rmtree, directory)
return directory
def create_file(self, filename, contents=''):
fp = open(filename, 'w')
try:
fp.write(contents)
finally:
fp.close()
def touch(self, filename):
return self.create_file(filename)
class TestPathExists(TestCase, PathHelpers):
def test_exists(self):
tempdir = self.mkdtemp()
self.assertThat(tempdir, PathExists())
def test_not_exists(self):
doesntexist = os.path.join(self.mkdtemp(), 'doesntexist')
mismatch = PathExists().match(doesntexist)
self.assertThat(
"%s does not exist." % doesntexist, Equals(mismatch.describe()))
class TestDirExists(TestCase, PathHelpers):
def test_exists(self):
tempdir = self.mkdtemp()
self.assertThat(tempdir, DirExists())
def test_not_exists(self):
doesntexist = os.path.join(self.mkdtemp(), 'doesntexist')
mismatch = DirExists().match(doesntexist)
self.assertThat(
PathExists().match(doesntexist).describe(),
Equals(mismatch.describe()))
def test_not_a_directory(self):
filename = os.path.join(self.mkdtemp(), 'foo')
self.touch(filename)
mismatch = DirExists().match(filename)
self.assertThat(
"%s is not a directory." % filename, Equals(mismatch.describe()))
class TestFileExists(TestCase, PathHelpers):
def test_exists(self):
tempdir = self.mkdtemp()
filename = os.path.join(tempdir, 'filename')
self.touch(filename)
self.assertThat(filename, FileExists())
def test_not_exists(self):
doesntexist = os.path.join(self.mkdtemp(), 'doesntexist')
mismatch = FileExists().match(doesntexist)
self.assertThat(
PathExists().match(doesntexist).describe(),
Equals(mismatch.describe()))
def test_not_a_file(self):
tempdir = self.mkdtemp()
mismatch = FileExists().match(tempdir)
self.assertThat(
"%s is not a file." % tempdir, Equals(mismatch.describe()))
class TestDirContains(TestCase, PathHelpers):
def test_empty(self):
tempdir = self.mkdtemp()
self.assertThat(tempdir, DirContains([]))
def test_not_exists(self):
doesntexist = os.path.join(self.mkdtemp(), 'doesntexist')
mismatch = DirContains([]).match(doesntexist)
self.assertThat(
PathExists().match(doesntexist).describe(),
Equals(mismatch.describe()))
def test_contains_files(self):
tempdir = self.mkdtemp()
self.touch(os.path.join(tempdir, 'foo'))
self.touch(os.path.join(tempdir, 'bar'))
self.assertThat(tempdir, DirContains(['bar', 'foo']))
def test_matcher(self):
tempdir = self.mkdtemp()
self.touch(os.path.join(tempdir, 'foo'))
self.touch(os.path.join(tempdir, 'bar'))
self.assertThat(tempdir, DirContains(matcher=Contains('bar')))
def test_neither_specified(self):
self.assertRaises(AssertionError, DirContains)
def test_both_specified(self):
self.assertRaises(
AssertionError, DirContains, filenames=[], matcher=Contains('a'))
def test_does_not_contain_files(self):
tempdir = self.mkdtemp()
self.touch(os.path.join(tempdir, 'foo'))
mismatch = DirContains(['bar', 'foo']).match(tempdir)
self.assertThat(
Equals(['bar', 'foo']).match(['foo']).describe(),
Equals(mismatch.describe()))
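DirContains, per the tests above, compares a sorted directory listing against the expected names. A minimal equivalent check:

```python
import os

def dir_contains(path, filenames):
    """True if path's entries are exactly filenames, order ignored."""
    return sorted(os.listdir(path)) == sorted(filenames)
```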
class TestFileContains(TestCase, PathHelpers):
def test_not_exists(self):
doesntexist = os.path.join(self.mkdtemp(), 'doesntexist')
mismatch = FileContains('').match(doesntexist)
self.assertThat(
PathExists().match(doesntexist).describe(),
Equals(mismatch.describe()))
def test_contains(self):
tempdir = self.mkdtemp()
filename = os.path.join(tempdir, 'foo')
self.create_file(filename, 'Hello World!')
self.assertThat(filename, FileContains('Hello World!'))
def test_matcher(self):
tempdir = self.mkdtemp()
filename = os.path.join(tempdir, 'foo')
self.create_file(filename, 'Hello World!')
self.assertThat(
filename, FileContains(matcher=DocTestMatches('Hello World!')))
def test_neither_specified(self):
self.assertRaises(AssertionError, FileContains)
def test_both_specified(self):
self.assertRaises(
AssertionError, FileContains, contents=[], matcher=Contains('a'))
def test_does_not_contain(self):
tempdir = self.mkdtemp()
filename = os.path.join(tempdir, 'foo')
self.create_file(filename, 'Goodbye Cruel World!')
mismatch = FileContains('Hello World!').match(filename)
self.assertThat(
Equals('Hello World!').match('Goodbye Cruel World!').describe(),
Equals(mismatch.describe()))
class TestTarballContains(TestCase, PathHelpers):
def test_match(self):
tempdir = self.mkdtemp()
in_temp_dir = lambda x: os.path.join(tempdir, x)
self.touch(in_temp_dir('a'))
self.touch(in_temp_dir('b'))
tarball = tarfile.open(in_temp_dir('foo.tar.gz'), 'w')
tarball.add(in_temp_dir('a'), 'a')
tarball.add(in_temp_dir('b'), 'b')
tarball.close()
self.assertThat(
in_temp_dir('foo.tar.gz'), TarballContains(['b', 'a']))
def test_mismatch(self):
tempdir = self.mkdtemp()
in_temp_dir = lambda x: os.path.join(tempdir, x)
self.touch(in_temp_dir('a'))
self.touch(in_temp_dir('b'))
tarball = tarfile.open(in_temp_dir('foo.tar.gz'), 'w')
tarball.add(in_temp_dir('a'), 'a')
tarball.add(in_temp_dir('b'), 'b')
tarball.close()
mismatch = TarballContains(['d', 'c']).match(in_temp_dir('foo.tar.gz'))
self.assertEqual(
mismatch.describe(),
Equals(['c', 'd']).match(['a', 'b']).describe())
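TarballContains compares the archive's sorted member names with the expected list, as test_mismatch shows. The underlying stdlib call:

```python
import tarfile

def tarball_names(path):
    """Sorted member names of a tar archive (compression auto-detected)."""
    with tarfile.open(path) as tf:
        return sorted(tf.getnames())
```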
class TestSamePath(TestCase, PathHelpers):
def test_same_string(self):
self.assertThat('foo', SamePath('foo'))
def test_relative_and_absolute(self):
path = 'foo'
abspath = os.path.abspath(path)
self.assertThat(path, SamePath(abspath))
self.assertThat(abspath, SamePath(path))
def test_real_path(self):
tempdir = self.mkdtemp()
source = os.path.join(tempdir, 'source')
self.touch(source)
target = os.path.join(tempdir, 'target')
try:
os.symlink(source, target)
except (AttributeError, NotImplementedError):
self.skip("No symlink support")
self.assertThat(source, SamePath(target))
self.assertThat(target, SamePath(source))
class TestHasPermissions(TestCase, PathHelpers):
def test_match(self):
tempdir = self.mkdtemp()
filename = os.path.join(tempdir, 'filename')
self.touch(filename)
permissions = oct(os.stat(filename).st_mode)[-4:]
self.assertThat(filename, HasPermissions(permissions))
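test_match above derives the expected permission string from the file's own mode; the slice `oct(...)[-4:]` keeps the last four octal digits. The same extraction in isolation (the helper name is ours, and the result assumes POSIX permission bits):

```python
import os

def permissions_of(path):
    """The last four octal digits of a path's mode, e.g. '0644'."""
    return oct(os.stat(path).st_mode)[-4:]
```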
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)


@@ -1,255 +0,0 @@
# Copyright (c) 2008-2011 testtools developers. See LICENSE for details.
from testtools import TestCase
from testtools.matchers import (
DocTestMatches,
Equals,
LessThan,
MatchesStructure,
Mismatch,
NotEquals,
)
from testtools.matchers._higherorder import (
AfterPreprocessing,
AllMatch,
Annotate,
AnnotatedMismatch,
AnyMatch,
MatchesAny,
MatchesAll,
MatchesPredicate,
MatchesPredicateWithParams,
Not,
)
from testtools.tests.helpers import FullStackRunTest
from testtools.tests.matchers.helpers import TestMatchersInterface
class TestAllMatch(TestCase, TestMatchersInterface):
matches_matcher = AllMatch(LessThan(10))
matches_matches = [
[9, 9, 9],
(9, 9),
iter([9, 9, 9, 9, 9]),
]
matches_mismatches = [
[11, 9, 9],
iter([9, 12, 9, 11]),
]
str_examples = [
("AllMatch(LessThan(12))", AllMatch(LessThan(12))),
]
describe_examples = [
('Differences: [\n'
'11 >= 10\n'
'10 >= 10\n'
']',
[11, 9, 10],
AllMatch(LessThan(10))),
]
class TestAnyMatch(TestCase, TestMatchersInterface):
matches_matcher = AnyMatch(Equals('elephant'))
matches_matches = [
['grass', 'cow', 'steak', 'milk', 'elephant'],
(13, 'elephant'),
['elephant', 'elephant', 'elephant'],
set(['hippo', 'rhino', 'elephant']),
]
matches_mismatches = [
[],
['grass', 'cow', 'steak', 'milk'],
(13, 12, 10),
['element', 'hephalump', 'pachyderm'],
set(['hippo', 'rhino', 'diplodocus']),
]
str_examples = [
("AnyMatch(Equals('elephant'))", AnyMatch(Equals('elephant'))),
]
describe_examples = [
('Differences: [\n'
'11 != 7\n'
'9 != 7\n'
'10 != 7\n'
']',
[11, 9, 10],
AnyMatch(Equals(7))),
]
class TestAfterPreprocessing(TestCase, TestMatchersInterface):
def parity(x):
return x % 2
matches_matcher = AfterPreprocessing(parity, Equals(1))
matches_matches = [3, 5]
matches_mismatches = [2]
str_examples = [
("AfterPreprocessing(<function parity>, Equals(1))",
AfterPreprocessing(parity, Equals(1))),
]
describe_examples = [
("0 != 1: after <function parity> on 2", 2,
AfterPreprocessing(parity, Equals(1))),
("0 != 1", 2,
AfterPreprocessing(parity, Equals(1), annotate=False)),
]
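AfterPreprocessing applies a function to the value before delegating to the wrapped matcher, which is why parity-of-3 matches Equals(1) above. The composition, with predicates standing in for matchers:

```python
def after_preprocessing(fn, predicate):
    """Build a predicate that applies fn to its input first."""
    return lambda value: predicate(fn(value))

# parity of the value must equal 1, i.e. the value is odd
is_odd = after_preprocessing(lambda x: x % 2, lambda p: p == 1)
```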
class TestMatchersAnyInterface(TestCase, TestMatchersInterface):
matches_matcher = MatchesAny(DocTestMatches("1"), DocTestMatches("2"))
matches_matches = ["1", "2"]
matches_mismatches = ["3"]
str_examples = [(
"MatchesAny(DocTestMatches('1\\n'), DocTestMatches('2\\n'))",
MatchesAny(DocTestMatches("1"), DocTestMatches("2"))),
]
describe_examples = [("""Differences: [
Expected:
1
Got:
3
Expected:
2
Got:
3
]""",
"3", MatchesAny(DocTestMatches("1"), DocTestMatches("2")))]
class TestMatchesAllInterface(TestCase, TestMatchersInterface):
matches_matcher = MatchesAll(NotEquals(1), NotEquals(2))
matches_matches = [3, 4]
matches_mismatches = [1, 2]
str_examples = [
("MatchesAll(NotEquals(1), NotEquals(2))",
MatchesAll(NotEquals(1), NotEquals(2)))]
describe_examples = [
("""Differences: [
1 == 1
]""",
1, MatchesAll(NotEquals(1), NotEquals(2))),
("1 == 1", 1,
MatchesAll(NotEquals(2), NotEquals(1), Equals(3), first_only=True)),
]
class TestAnnotate(TestCase, TestMatchersInterface):
matches_matcher = Annotate("foo", Equals(1))
matches_matches = [1]
matches_mismatches = [2]
str_examples = [
("Annotate('foo', Equals(1))", Annotate("foo", Equals(1)))]
describe_examples = [("2 != 1: foo", 2, Annotate('foo', Equals(1)))]
def test_if_message_no_message(self):
# Annotate.if_message returns the given matcher if there is no
# message.
matcher = Equals(1)
not_annotated = Annotate.if_message('', matcher)
self.assertIs(matcher, not_annotated)
def test_if_message_given_message(self):
# Annotate.if_message returns an annotated version of the matcher if a
# message is provided.
matcher = Equals(1)
expected = Annotate('foo', matcher)
annotated = Annotate.if_message('foo', matcher)
self.assertThat(
annotated,
MatchesStructure.fromExample(expected, 'annotation', 'matcher'))
class TestAnnotatedMismatch(TestCase):
run_tests_with = FullStackRunTest
def test_forwards_details(self):
x = Mismatch('description', {'foo': 'bar'})
annotated = AnnotatedMismatch("annotation", x)
self.assertEqual(x.get_details(), annotated.get_details())
class TestNotInterface(TestCase, TestMatchersInterface):
matches_matcher = Not(Equals(1))
matches_matches = [2]
matches_mismatches = [1]
str_examples = [
("Not(Equals(1))", Not(Equals(1))),
("Not(Equals('1'))", Not(Equals('1')))]
describe_examples = [('1 matches Equals(1)', 1, Not(Equals(1)))]
def is_even(x):
return x % 2 == 0
class TestMatchesPredicate(TestCase, TestMatchersInterface):
matches_matcher = MatchesPredicate(is_even, "%s is not even")
matches_matches = [2, 4, 6, 8]
matches_mismatches = [3, 5, 7, 9]
str_examples = [
("MatchesPredicate(%r, %r)" % (is_even, "%s is not even"),
MatchesPredicate(is_even, "%s is not even")),
]
describe_examples = [
('7 is not even', 7, MatchesPredicate(is_even, "%s is not even")),
]
def between(x, low, high):
return low < x < high
class TestMatchesPredicateWithParams(TestCase, TestMatchersInterface):
matches_matcher = MatchesPredicateWithParams(
between, "{0} is not between {1} and {2}")(1, 9)
matches_matches = [2, 4, 6, 8]
matches_mismatches = [0, 1, 9, 10]
str_examples = [
("MatchesPredicateWithParams(%r, %r)(%s)" % (
between, "{0} is not between {1} and {2}", "1, 2"),
MatchesPredicateWithParams(
between, "{0} is not between {1} and {2}")(1, 2)),
("Between(1, 2)", MatchesPredicateWithParams(
between, "{0} is not between {1} and {2}", "Between")(1, 2)),
]
describe_examples = [
('1 is not between 2 and 3', 1, MatchesPredicateWithParams(
between, "{0} is not between {1} and {2}")(2, 3)),
]
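MatchesPredicateWithParams is a two-stage factory: it binds a predicate and a format-string message, then each call with concrete parameters produces a matcher. A sketch of that shape, returning None on match and the formatted message on mismatch:

```python
def matches_predicate_with_params(predicate, message):
    """Factory of matchers: bind params first, then match values."""
    def with_params(*params):
        def match(value):
            if predicate(value, *params):
                return None  # match
            return message.format(value, *params)
        return match
    return with_params

is_between = matches_predicate_with_params(
    lambda x, low, high: low < x < high,
    '{0} is not between {1} and {2}')
```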
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)


@@ -1,132 +0,0 @@
# Copyright (c) 2008-2012 testtools developers. See LICENSE for details.
"""Tests for matchers."""
from testtools import (
Matcher, # check that Matcher is exposed at the top level for docs.
TestCase,
)
from testtools.compat import (
str_is_unicode,
text_repr,
_u,
)
from testtools.matchers import (
Equals,
MatchesException,
Raises,
)
from testtools.matchers._impl import (
Mismatch,
MismatchDecorator,
MismatchError,
)
from testtools.tests.helpers import FullStackRunTest
# Silence pyflakes.
Matcher
class TestMismatch(TestCase):
run_tests_with = FullStackRunTest
def test_constructor_arguments(self):
mismatch = Mismatch("some description", {'detail': "things"})
self.assertEqual("some description", mismatch.describe())
self.assertEqual({'detail': "things"}, mismatch.get_details())
def test_constructor_no_arguments(self):
mismatch = Mismatch()
self.assertThat(mismatch.describe,
Raises(MatchesException(NotImplementedError)))
self.assertEqual({}, mismatch.get_details())
class TestMismatchError(TestCase):
def test_is_assertion_error(self):
# MismatchError is an AssertionError, so that most of the time, it
# looks like a test failure, rather than an error.
def raise_mismatch_error():
raise MismatchError(2, Equals(3), Equals(3).match(2))
self.assertRaises(AssertionError, raise_mismatch_error)
def test_default_description_is_mismatch(self):
mismatch = Equals(3).match(2)
e = MismatchError(2, Equals(3), mismatch)
self.assertEqual(mismatch.describe(), str(e))
def test_default_description_unicode(self):
matchee = _u('\xa7')
matcher = Equals(_u('a'))
mismatch = matcher.match(matchee)
e = MismatchError(matchee, matcher, mismatch)
self.assertEqual(mismatch.describe(), str(e))
def test_verbose_description(self):
matchee = 2
matcher = Equals(3)
mismatch = matcher.match(2)
e = MismatchError(matchee, matcher, mismatch, True)
expected = (
'Match failed. Matchee: %r\n'
'Matcher: %s\n'
'Difference: %s\n' % (
matchee,
matcher,
matcher.match(matchee).describe(),
))
self.assertEqual(expected, str(e))
def test_verbose_unicode(self):
# When assertThat is given matchees or matchers that contain non-ASCII
# unicode strings, we can still provide a meaningful error.
matchee = _u('\xa7')
matcher = Equals(_u('a'))
mismatch = matcher.match(matchee)
expected = (
'Match failed. Matchee: %s\n'
'Matcher: %s\n'
'Difference: %s\n' % (
text_repr(matchee),
matcher,
mismatch.describe(),
))
e = MismatchError(matchee, matcher, mismatch, True)
if str_is_unicode:
actual = str(e)
else:
actual = unicode(e)
# Using str() should still work, and return ascii only
self.assertEqual(
expected.replace(matchee, matchee.encode("unicode-escape")),
str(e).decode("ascii"))
self.assertEqual(expected, actual)
class TestMismatchDecorator(TestCase):
run_tests_with = FullStackRunTest
def test_forwards_description(self):
x = Mismatch("description", {'foo': 'bar'})
decorated = MismatchDecorator(x)
self.assertEqual(x.describe(), decorated.describe())
def test_forwards_details(self):
x = Mismatch("description", {'foo': 'bar'})
decorated = MismatchDecorator(x)
self.assertEqual(x.get_details(), decorated.get_details())
def test_repr(self):
x = Mismatch("description", {'foo': 'bar'})
decorated = MismatchDecorator(x)
self.assertEqual(
'<testtools.matchers.MismatchDecorator(%r)>' % (x,),
repr(decorated))
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)


@@ -1,203 +0,0 @@
# Copyright (c) 2008-2016 testtools developers. See LICENSE for details.
import warnings
from testtools import TestCase
from testtools.matchers import (
AfterPreprocessing,
Equals,
MatchesStructure,
MatchesListwise,
Contains,
HasLength,
)
from testtools.matchers._warnings import Warnings, IsDeprecated, WarningMessage
from testtools.tests.helpers import FullStackRunTest
from testtools.tests.matchers.helpers import TestMatchersInterface
def make_warning(warning_type, message):
warnings.warn(message, warning_type, 2)
def make_warning_message(message, category, filename=None, lineno=None, line=None):
return warnings.WarningMessage(
message=message,
category=category,
filename=filename,
lineno=lineno,
line=line)
class TestWarningMessageCategoryTypeInterface(TestCase, TestMatchersInterface):
"""
Tests for `testtools.matchers._warnings.WarningMessage`.
In particular matching the ``category_type``.
"""
matches_matcher = WarningMessage(category_type=DeprecationWarning)
warning_foo = make_warning_message('foo', DeprecationWarning)
warning_bar = make_warning_message('bar', SyntaxWarning)
warning_base = make_warning_message('base', Warning)
matches_matches = [warning_foo]
matches_mismatches = [warning_bar, warning_base]
str_examples = []
describe_examples = []
class TestWarningMessageMessageInterface(TestCase, TestMatchersInterface):
"""
Tests for `testtools.matchers._warnings.WarningMessage`.
In particular matching the ``message``.
"""
matches_matcher = WarningMessage(category_type=DeprecationWarning,
message=Equals('foo'))
warning_foo = make_warning_message('foo', DeprecationWarning)
warning_bar = make_warning_message('bar', DeprecationWarning)
matches_matches = [warning_foo]
matches_mismatches = [warning_bar]
str_examples = []
describe_examples = []
class TestWarningMessageFilenameInterface(TestCase, TestMatchersInterface):
"""
Tests for `testtools.matchers._warnings.WarningMessage`.
In particular matching the ``filename``.
"""
matches_matcher = WarningMessage(category_type=DeprecationWarning,
filename=Equals('a'))
warning_foo = make_warning_message('foo', DeprecationWarning, filename='a')
warning_bar = make_warning_message('bar', DeprecationWarning, filename='b')
matches_matches = [warning_foo]
matches_mismatches = [warning_bar]
str_examples = []
describe_examples = []
class TestWarningMessageLineNumberInterface(TestCase, TestMatchersInterface):
"""
Tests for `testtools.matchers._warnings.WarningMessage`.
In particular matching the ``lineno``.
"""
matches_matcher = WarningMessage(category_type=DeprecationWarning,
lineno=Equals(42))
warning_foo = make_warning_message('foo', DeprecationWarning, lineno=42)
warning_bar = make_warning_message('bar', DeprecationWarning, lineno=21)
matches_matches = [warning_foo]
matches_mismatches = [warning_bar]
str_examples = []
describe_examples = []
class TestWarningMessageLineInterface(TestCase, TestMatchersInterface):
"""
Tests for `testtools.matchers._warnings.WarningMessage`.
In particular matching the ``line``.
"""
matches_matcher = WarningMessage(category_type=DeprecationWarning,
line=Equals('x'))
warning_foo = make_warning_message('foo', DeprecationWarning, line='x')
warning_bar = make_warning_message('bar', DeprecationWarning, line='y')
matches_matches = [warning_foo]
matches_mismatches = [warning_bar]
str_examples = []
describe_examples = []
class TestWarningsInterface(TestCase, TestMatchersInterface):
"""
Tests for `testtools.matchers._warnings.Warnings`.
Specifically without the optional argument.
"""
matches_matcher = Warnings()
def old_func():
warnings.warn('old_func is deprecated', DeprecationWarning, 2)
matches_matches = [old_func]
matches_mismatches = [lambda: None]
# Tricky to get function objects to render constantly, and the interfaces
# helper uses assertEqual rather than (for instance) DocTestMatches.
str_examples = []
describe_examples = []
class TestWarningsMatcherInterface(TestCase, TestMatchersInterface):
"""
Tests for `testtools.matchers._warnings.Warnings`.
Specifically with the optional matcher argument.
"""
matches_matcher = Warnings(
warnings_matcher=MatchesListwise([
MatchesStructure(
message=AfterPreprocessing(
str, Contains('old_func')))]))
def old_func():
warnings.warn('old_func is deprecated', DeprecationWarning, 2)
def older_func():
warnings.warn('older_func is deprecated', DeprecationWarning, 2)
matches_matches = [old_func]
matches_mismatches = [lambda: None, older_func]
str_examples = []
describe_examples = []
class TestWarningsMatcherNoWarningsInterface(TestCase, TestMatchersInterface):
"""
Tests for `testtools.matchers._warnings.Warnings`.
Specifically with the optional matcher argument matching that there were no
warnings.
"""
matches_matcher = Warnings(warnings_matcher=HasLength(0))
def nowarning_func():
pass
def warning_func():
warnings.warn('warning_func is deprecated', DeprecationWarning, 2)
matches_matches = [nowarning_func]
matches_mismatches = [warning_func]
str_examples = []
describe_examples = []
class TestWarningMessage(TestCase):
"""
Tests for `testtools.matchers._warnings.WarningMessage`.
"""
run_tests_with = FullStackRunTest
def test_category(self):
def old_func():
warnings.warn('old_func is deprecated', DeprecationWarning, 2)
self.assertThat(old_func, IsDeprecated(Contains('old_func')))
class TestIsDeprecated(TestCase):
"""
Tests for `testtools.matchers._warnings.IsDeprecated`.
"""
run_tests_with = FullStackRunTest
def test_warning(self):
def old_func():
warnings.warn('old_func is deprecated', DeprecationWarning, 2)
self.assertThat(old_func, IsDeprecated(Contains('old_func')))
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
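The Warnings and IsDeprecated matchers tested above work by invoking a callable and capturing the warnings it raises. A self-contained stdlib sketch of that capture step (assuming only `warnings.catch_warnings`, not the testtools internals):

```python
import warnings


def captured_warnings(f):
    """Call f and return the list of WarningMessage objects it raised;
    a sketch of what a Warnings-style matcher does internally."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")  # ensure nothing is suppressed
        f()
    return caught


def old_func():
    warnings.warn('old_func is deprecated', DeprecationWarning, 2)

caught = captured_warnings(old_func)
assert len(caught) == 1
assert caught[0].category is DeprecationWarning
assert 'old_func' in str(caught[0].message)
```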


@@ -1,235 +0,0 @@
# Copyright (c) 2015 testtools developers. See LICENSE for details.
"""A collection of sample TestCases.
These are primarily of use in testing the test framework.
"""
from testscenarios import multiply_scenarios
from testtools import TestCase
from testtools.matchers import (
AfterPreprocessing,
Contains,
Equals,
MatchesDict,
MatchesListwise,
)
def make_test_case(test_method_name, set_up=None, test_body=None,
tear_down=None, cleanups=(), pre_set_up=None,
post_tear_down=None):
"""Make a test case with the given behaviors.
All callables are unary callables that receive this test as their argument.
:param str test_method_name: The name of the test method.
:param callable set_up: Implementation of setUp.
:param callable test_body: Implementation of the actual test. Will be
assigned to the test method.
:param callable tear_down: Implementation of tearDown.
:param cleanups: Iterable of callables that will be added as cleanups.
:param callable pre_set_up: Called before the upcall to setUp().
:param callable post_tear_down: Called after the upcall to tearDown().
:return: A ``testtools.TestCase``.
"""
set_up = set_up if set_up else _do_nothing
test_body = test_body if test_body else _do_nothing
tear_down = tear_down if tear_down else _do_nothing
pre_set_up = pre_set_up if pre_set_up else _do_nothing
post_tear_down = post_tear_down if post_tear_down else _do_nothing
return _ConstructedTest(
test_method_name, set_up, test_body, tear_down, cleanups,
pre_set_up, post_tear_down,
)
class _ConstructedTest(TestCase):
"""A test case defined by arguments, rather than overrides."""
def __init__(self, test_method_name, set_up, test_body, tear_down,
cleanups, pre_set_up, post_tear_down):
"""Construct a test case.
See ``make_test_case`` for full documentation.
"""
setattr(self, test_method_name, self.test_case)
super(_ConstructedTest, self).__init__(test_method_name)
self._set_up = set_up
self._test_body = test_body
self._tear_down = tear_down
self._test_cleanups = cleanups
self._pre_set_up = pre_set_up
self._post_tear_down = post_tear_down
def setUp(self):
self._pre_set_up(self)
super(_ConstructedTest, self).setUp()
for cleanup in self._test_cleanups:
self.addCleanup(cleanup, self)
self._set_up(self)
def test_case(self):
self._test_body(self)
def tearDown(self):
self._tear_down(self)
super(_ConstructedTest, self).tearDown()
self._post_tear_down(self)
def _do_nothing(case):
pass
_success = _do_nothing
def _error(case):
1/0 # arbitrary non-failure exception
def _failure(case):
case.fail('arbitrary failure')
def _skip(case):
case.skip('arbitrary skip message')
def _expected_failure(case):
case.expectFailure('arbitrary expected failure', _failure, case)
def _unexpected_success(case):
case.expectFailure('arbitrary unexpected success', _success, case)
behaviors = [
('success', _success),
('fail', _failure),
('error', _error),
('skip', _skip),
('xfail', _expected_failure),
('uxsuccess', _unexpected_success),
]
def _make_behavior_scenarios(stage):
"""Given a test stage, iterate over behavior scenarios for that stage.
e.g.
>>> list(_make_behavior_scenarios('set_up'))
[('set_up=success', {'set_up_behavior': <function _success>}),
('set_up=fail', {'set_up_behavior': <function _failure>}),
('set_up=error', {'set_up_behavior': <function _error>}),
('set_up=skip', {'set_up_behavior': <function _skip>}),
('set_up=xfail', {'set_up_behavior': <function _expected_failure>}),
('set_up=uxsuccess',
{'set_up_behavior': <function _unexpected_success>})]
Ordering is not consistent.
"""
return (
('%s=%s' % (stage, behavior),
{'%s_behavior' % (stage,): function})
for (behavior, function) in behaviors
)
def make_case_for_behavior_scenario(case):
"""Given a test with a behavior scenario installed, make a TestCase."""
cleanup_behavior = getattr(case, 'cleanup_behavior', None)
cleanups = [cleanup_behavior] if cleanup_behavior else []
return make_test_case(
case.getUniqueString(),
set_up=getattr(case, 'set_up_behavior', _do_nothing),
test_body=getattr(case, 'body_behavior', _do_nothing),
tear_down=getattr(case, 'tear_down_behavior', _do_nothing),
cleanups=cleanups,
pre_set_up=getattr(case, 'pre_set_up_behavior', _do_nothing),
post_tear_down=getattr(case, 'post_tear_down_behavior', _do_nothing),
)
class _SetUpFailsOnGlobalState(TestCase):
"""Fail to upcall setUp on first run. Fail to upcall tearDown after.
This simulates a test that fails to upcall in ``setUp`` if some global
state is broken, and fails to call ``tearDown`` when the global state
breaks but works after that.
"""
first_run = True
def setUp(self):
if not self.first_run:
return
super(_SetUpFailsOnGlobalState, self).setUp()
def test_success(self):
pass
def tearDown(self):
if not self.first_run:
super(_SetUpFailsOnGlobalState, self).tearDown()
self.__class__.first_run = False
@classmethod
def make_scenario(cls):
case = cls('test_success')
return {
'case': case,
'expected_first_result': _test_error_traceback(
case, Contains('TestCase.tearDown was not called')),
'expected_second_result': _test_error_traceback(
case, Contains('TestCase.setUp was not called')),
}
def _test_error_traceback(case, traceback_matcher):
"""Match result log of single test that errored out.
``traceback_matcher`` is applied to the text of the traceback.
"""
return MatchesListwise([
Equals(('startTest', case)),
MatchesListwise([
Equals('addError'),
Equals(case),
MatchesDict({
'traceback': AfterPreprocessing(
lambda x: x.as_text(),
traceback_matcher,
)
})
]),
Equals(('stopTest', case)),
])
"""
A list that can be used with testscenarios to test every deterministic sample
case that we have.
"""
deterministic_sample_cases_scenarios = multiply_scenarios(
_make_behavior_scenarios('set_up'),
_make_behavior_scenarios('body'),
_make_behavior_scenarios('tear_down'),
_make_behavior_scenarios('cleanup'),
) + [
('tear_down_fails_after_upcall', {
'post_tear_down_behavior': _error,
}),
]
"""
A list that can be used with testscenarios to test every non-deterministic
sample case that we have.
"""
nondeterministic_sample_cases_scenarios = [
('setup-fails-global-state', _SetUpFailsOnGlobalState.make_scenario()),
]
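The `multiply_scenarios` call above comes from testscenarios; conceptually it is a cross-product over scenario lists, joining names and merging parameter dicts. A hedged sketch of that behavior (the function name and the comma-joined naming are assumptions for illustration, not the library's guaranteed output format):

```python
from itertools import product


def multiply_scenarios_sketch(*scenario_lists):
    """Cross-product of scenario lists: pick one (name, params) pair from
    each list, join the names and merge the parameter dicts."""
    result = []
    for combo in product(*scenario_lists):
        name = ','.join(name for name, _ in combo)
        params = {}
        for _, d in combo:
            params.update(d)
        result.append((name, params))
    return result


set_up = [('set_up=success', {'set_up_behavior': 'ok'})]
body = [('body=fail', {'body_behavior': 'fail'}),
        ('body=error', {'body_behavior': 'error'})]
scenarios = multiply_scenarios_sketch(set_up, body)
assert scenarios[0][0] == 'set_up=success,body=fail'
assert scenarios[1][1] == {'set_up_behavior': 'ok', 'body_behavior': 'error'}
```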


@@ -1,152 +0,0 @@
from doctest import ELLIPSIS
from testtools import (
TestCase,
)
from testtools.assertions import (
assert_that,
)
from testtools.compat import (
_u,
)
from testtools.content import (
TracebackContent,
)
from testtools.matchers import (
Annotate,
DocTestMatches,
Equals,
)
class AssertThatTests(object):
"""A mixin containing shared tests for assertThat and assert_that."""
def assert_that_callable(self, *args, **kwargs):
raise NotImplementedError
def assertFails(self, message, function, *args, **kwargs):
"""Assert that function raises a failure with the given message."""
failure = self.assertRaises(
self.failureException, function, *args, **kwargs)
self.assert_that_callable(failure, DocTestMatches(message, ELLIPSIS))
def test_assertThat_matches_clean(self):
class Matcher(object):
def match(self, foo):
return None
self.assert_that_callable("foo", Matcher())
def test_assertThat_mismatch_raises_description(self):
calls = []
class Mismatch(object):
def __init__(self, thing):
self.thing = thing
def describe(self):
calls.append(('describe_diff', self.thing))
return "object is not a thing"
def get_details(self):
return {}
class Matcher(object):
def match(self, thing):
calls.append(('match', thing))
return Mismatch(thing)
def __str__(self):
calls.append(('__str__',))
return "a description"
class Test(type(self)):
def test(self):
self.assert_that_callable("foo", Matcher())
result = Test("test").run()
self.assertEqual([
('match', "foo"),
('describe_diff', "foo"),
], calls)
self.assertFalse(result.wasSuccessful())
def test_assertThat_output(self):
matchee = 'foo'
matcher = Equals('bar')
expected = matcher.match(matchee).describe()
self.assertFails(expected, self.assert_that_callable, matchee, matcher)
def test_assertThat_message_is_annotated(self):
matchee = 'foo'
matcher = Equals('bar')
expected = Annotate('woo', matcher).match(matchee).describe()
self.assertFails(expected,
self.assert_that_callable, matchee, matcher, 'woo')
def test_assertThat_verbose_output(self):
matchee = 'foo'
matcher = Equals('bar')
expected = (
'Match failed. Matchee: %r\n'
'Matcher: %s\n'
'Difference: %s\n' % (
matchee,
matcher,
matcher.match(matchee).describe(),
))
self.assertFails(
expected,
self.assert_that_callable, matchee, matcher, verbose=True)
def get_error_string(self, e):
"""Get the string showing how 'e' would be formatted in test output.
This is a little bit hacky, since it's designed to give consistent
output regardless of Python version.
In testtools, TestResult._exc_info_to_unicode is the point of dispatch
between various different implementations of methods that format
exceptions, so that's what we have to call. However, that method cares
about stack traces and formats the exception class. We don't care
about either of these, so we take its output and parse it a little.
"""
error = TracebackContent((e.__class__, e, None), self).as_text()
# We aren't at all interested in the traceback.
if error.startswith('Traceback (most recent call last):\n'):
lines = error.splitlines(True)[1:]
for i, line in enumerate(lines):
if not line.startswith(' '):
break
error = ''.join(lines[i:])
# We aren't interested in how the exception type is formatted.
exc_class, error = error.split(': ', 1)
return error
def test_assertThat_verbose_unicode(self):
# When assertThat is given matchees or matchers that contain non-ASCII
# unicode strings, we can still provide a meaningful error.
matchee = _u('\xa7')
matcher = Equals(_u('a'))
expected = (
'Match failed. Matchee: %s\n'
'Matcher: %s\n'
'Difference: %s\n\n' % (
repr(matchee).replace("\\xa7", matchee),
matcher,
matcher.match(matchee).describe(),
))
e = self.assertRaises(
self.failureException, self.assert_that_callable, matchee, matcher,
verbose=True)
self.assertEqual(expected, self.get_error_string(e))
class TestAssertThatFunction(AssertThatTests, TestCase):
def assert_that_callable(self, *args, **kwargs):
return assert_that(*args, **kwargs)
class TestAssertThatMethod(AssertThatTests, TestCase):
def assert_that_callable(self, *args, **kwargs):
return self.assertThat(*args, **kwargs)
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
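Both code paths tested above reduce to the same core: run the matcher, and raise an AssertionError carrying the mismatch description if it fails. A minimal self-contained sketch (the `*Sketch` names are invented; the real testtools versions also handle annotation and verbose output):

```python
def assert_that_sketch(matchee, matcher, verbose=False):
    """Sketch of assert_that: raise AssertionError on mismatch."""
    mismatch = matcher.match(matchee)
    if mismatch is None:
        return
    raise AssertionError(mismatch.describe())


class EqualsSketch(object):
    """Hypothetical equality matcher following the match()/describe()
    protocol used throughout these tests."""
    def __init__(self, expected):
        self.expected = expected

    def match(self, other):
        if other == self.expected:
            return None
        matcher_self = self

        class _Mismatch(object):
            def describe(self):
                return '%r != %r' % (matcher_self.expected, other)
        return _Mismatch()


assert_that_sketch('foo', EqualsSketch('foo'))  # passes silently
raised = False
try:
    assert_that_sketch('foo', EqualsSketch('bar'))
except AssertionError as e:
    raised = True
    assert "'bar' != 'foo'" in str(e)
assert raised
```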


@@ -1,298 +0,0 @@
# Copyright (c) 2010 testtools developers. See LICENSE for details.
"""Tests for miscellaneous compatibility functions"""
import io
import linecache2 as linecache
import os
import sys
import tempfile
import traceback
import testtools
from testtools.compat import (
_b,
_u,
reraise,
str_is_unicode,
text_repr,
unicode_output_stream,
)
from testtools.matchers import (
Equals,
Is,
IsInstance,
MatchesException,
Not,
Raises,
)
class _FakeOutputStream(object):
"""A simple file-like object for testing"""
def __init__(self):
self.writelog = []
def write(self, obj):
self.writelog.append(obj)
class TestUnicodeOutputStream(testtools.TestCase):
"""Test wrapping output streams so they work with arbitrary unicode"""
uni = _u("pa\u026a\u03b8\u0259n")
def setUp(self):
super(TestUnicodeOutputStream, self).setUp()
if sys.platform == "cli":
self.skip("IronPython shouldn't wrap streams to do encoding")
def test_no_encoding_becomes_ascii(self):
"""A stream with no encoding attribute gets ascii/replace strings"""
sout = _FakeOutputStream()
unicode_output_stream(sout).write(self.uni)
self.assertEqual([_b("pa???n")], sout.writelog)
def test_encoding_as_none_becomes_ascii(self):
"""A stream with encoding value of None gets ascii/replace strings"""
sout = _FakeOutputStream()
sout.encoding = None
unicode_output_stream(sout).write(self.uni)
self.assertEqual([_b("pa???n")], sout.writelog)
def test_bogus_encoding_becomes_ascii(self):
"""A stream with a bogus encoding gets ascii/replace strings"""
sout = _FakeOutputStream()
sout.encoding = "bogus"
unicode_output_stream(sout).write(self.uni)
self.assertEqual([_b("pa???n")], sout.writelog)
def test_partial_encoding_replace(self):
"""A string which can be partly encoded correctly should be"""
sout = _FakeOutputStream()
sout.encoding = "iso-8859-7"
unicode_output_stream(sout).write(self.uni)
self.assertEqual([_b("pa?\xe8?n")], sout.writelog)
@testtools.skipIf(str_is_unicode, "Tests behaviour when str is not unicode")
def test_unicode_encodings_wrapped_when_str_is_not_unicode(self):
"""A unicode encoding is wrapped but needs no error handler"""
sout = _FakeOutputStream()
sout.encoding = "utf-8"
uout = unicode_output_stream(sout)
self.assertEqual(uout.errors, "strict")
uout.write(self.uni)
self.assertEqual([_b("pa\xc9\xaa\xce\xb8\xc9\x99n")], sout.writelog)
@testtools.skipIf(not str_is_unicode, "Tests behaviour when str is unicode")
def test_unicode_encodings_not_wrapped_when_str_is_unicode(self):
# No wrapping needed if native str type is unicode
sout = _FakeOutputStream()
sout.encoding = "utf-8"
uout = unicode_output_stream(sout)
self.assertIs(uout, sout)
def test_stringio(self):
"""A StringIO object should maybe get an ascii native str type"""
try:
from cStringIO import StringIO
newio = False
except ImportError:
from io import StringIO
newio = True
sout = StringIO()
soutwrapper = unicode_output_stream(sout)
soutwrapper.write(self.uni)
if newio:
self.assertEqual(self.uni, sout.getvalue())
else:
self.assertEqual("pa???n", sout.getvalue())
def test_io_stringio(self):
# io.StringIO only accepts unicode so should be returned as itself.
s = io.StringIO()
self.assertEqual(s, unicode_output_stream(s))
def test_io_bytesio(self):
# io.BytesIO only accepts bytes so should be wrapped.
bytes_io = io.BytesIO()
self.assertThat(bytes_io, Not(Is(unicode_output_stream(bytes_io))))
# Will error if s was not wrapped properly.
unicode_output_stream(bytes_io).write(_u('foo'))
def test_io_textwrapper(self):
# TextIOWrapper accepts unicode, so it should be returned as itself.
text_io = io.TextIOWrapper(io.BytesIO())
self.assertThat(unicode_output_stream(text_io), Is(text_io))
# To be sure...
unicode_output_stream(text_io).write(_u('foo'))
class TestTextRepr(testtools.TestCase):
"""Ensure in extending repr, basic behaviours are not being broken"""
ascii_examples = (
# Single character examples
# C0 control codes should be escaped except multiline \n
("\x00", "'\\x00'", "'''\\\n\\x00'''"),
("\b", "'\\x08'", "'''\\\n\\x08'''"),
("\t", "'\\t'", "'''\\\n\\t'''"),
("\n", "'\\n'", "'''\\\n\n'''"),
("\r", "'\\r'", "'''\\\n\\r'''"),
# Quotes and backslash should match normal repr behaviour
('"', "'\"'", "'''\\\n\"'''"),
("'", "\"'\"", "'''\\\n\\''''"),
("\\", "'\\\\'", "'''\\\n\\\\'''"),
# DEL is also unprintable and should be escaped
("\x7F", "'\\x7f'", "'''\\\n\\x7f'''"),
# Character combinations that need double checking
("\r\n", "'\\r\\n'", "'''\\\n\\r\n'''"),
("\"'", "'\"\\''", "'''\\\n\"\\''''"),
("'\"", "'\\'\"'", "'''\\\n'\"'''"),
("\\n", "'\\\\n'", "'''\\\n\\\\n'''"),
("\\\n", "'\\\\\\n'", "'''\\\n\\\\\n'''"),
("\\' ", "\"\\\\' \"", "'''\\\n\\\\' '''"),
("\\'\n", "\"\\\\'\\n\"", "'''\\\n\\\\'\n'''"),
("\\'\"", "'\\\\\\'\"'", "'''\\\n\\\\'\"'''"),
("\\'''", "\"\\\\'''\"", "'''\\\n\\\\\\'\\'\\''''"),
)
# Bytes with the high bit set should always be escaped
bytes_examples = (
(_b("\x80"), "'\\x80'", "'''\\\n\\x80'''"),
(_b("\xA0"), "'\\xa0'", "'''\\\n\\xa0'''"),
(_b("\xC0"), "'\\xc0'", "'''\\\n\\xc0'''"),
(_b("\xFF"), "'\\xff'", "'''\\\n\\xff'''"),
(_b("\xC2\xA7"), "'\\xc2\\xa7'", "'''\\\n\\xc2\\xa7'''"),
)
# Unicode doesn't escape printable characters as per the Python 3 model
unicode_examples = (
# C1 codes are unprintable
(_u("\x80"), "'\\x80'", "'''\\\n\\x80'''"),
(_u("\x9F"), "'\\x9f'", "'''\\\n\\x9f'''"),
# No-break space is unprintable
(_u("\xA0"), "'\\xa0'", "'''\\\n\\xa0'''"),
# Letters latin alphabets are printable
(_u("\xA1"), _u("'\xa1'"), _u("'''\\\n\xa1'''")),
(_u("\xFF"), _u("'\xff'"), _u("'''\\\n\xff'''")),
(_u("\u0100"), _u("'\u0100'"), _u("'''\\\n\u0100'''")),
# Line and paragraph separators are unprintable
(_u("\u2028"), "'\\u2028'", "'''\\\n\\u2028'''"),
(_u("\u2029"), "'\\u2029'", "'''\\\n\\u2029'''"),
# Unpaired surrogates are unprintable
(_u("\uD800"), "'\\ud800'", "'''\\\n\\ud800'''"),
(_u("\uDFFF"), "'\\udfff'", "'''\\\n\\udfff'''"),
# Unprintable general categories not fully tested: Cc, Cf, Co, Cn, Zs
)
b_prefix = repr(_b(""))[:-2]
u_prefix = repr(_u(""))[:-2]
def test_ascii_examples_oneline_bytes(self):
for s, expected, _ in self.ascii_examples:
b = _b(s)
actual = text_repr(b, multiline=False)
# Add self.assertIsInstance check?
self.assertEqual(actual, self.b_prefix + expected)
self.assertEqual(eval(actual), b)
def test_ascii_examples_oneline_unicode(self):
for s, expected, _ in self.ascii_examples:
u = _u(s)
actual = text_repr(u, multiline=False)
self.assertEqual(actual, self.u_prefix + expected)
self.assertEqual(eval(actual), u)
def test_ascii_examples_multiline_bytes(self):
for s, _, expected in self.ascii_examples:
b = _b(s)
actual = text_repr(b, multiline=True)
self.assertEqual(actual, self.b_prefix + expected)
self.assertEqual(eval(actual), b)
def test_ascii_examples_multiline_unicode(self):
for s, _, expected in self.ascii_examples:
u = _u(s)
actual = text_repr(u, multiline=True)
self.assertEqual(actual, self.u_prefix + expected)
self.assertEqual(eval(actual), u)
def test_ascii_examples_defaultline_bytes(self):
for s, one, multi in self.ascii_examples:
expected = multi if "\n" in s else one
self.assertEqual(text_repr(_b(s)), self.b_prefix + expected)
def test_ascii_examples_defaultline_unicode(self):
for s, one, multi in self.ascii_examples:
expected = multi if "\n" in s else one
self.assertEqual(text_repr(_u(s)), self.u_prefix + expected)
def test_bytes_examples_oneline(self):
for b, expected, _ in self.bytes_examples:
actual = text_repr(b, multiline=False)
self.assertEqual(actual, self.b_prefix + expected)
self.assertEqual(eval(actual), b)
def test_bytes_examples_multiline(self):
for b, _, expected in self.bytes_examples:
actual = text_repr(b, multiline=True)
self.assertEqual(actual, self.b_prefix + expected)
self.assertEqual(eval(actual), b)
def test_unicode_examples_oneline(self):
for u, expected, _ in self.unicode_examples:
actual = text_repr(u, multiline=False)
self.assertEqual(actual, self.u_prefix + expected)
self.assertEqual(eval(actual), u)
def test_unicode_examples_multiline(self):
for u, _, expected in self.unicode_examples:
actual = text_repr(u, multiline=True)
self.assertEqual(actual, self.u_prefix + expected)
self.assertEqual(eval(actual), u)
class TestReraise(testtools.TestCase):
"""Tests for trivial reraise wrapper needed for Python 2/3 changes"""
def test_exc_info(self):
"""After reraise exc_info matches plus some extra traceback"""
try:
raise ValueError("Bad value")
except ValueError:
_exc_info = sys.exc_info()
try:
reraise(*_exc_info)
except ValueError:
_new_exc_info = sys.exc_info()
self.assertIs(_exc_info[0], _new_exc_info[0])
self.assertIs(_exc_info[1], _new_exc_info[1])
expected_tb = traceback.extract_tb(_exc_info[2])
self.assertEqual(expected_tb,
traceback.extract_tb(_new_exc_info[2])[-len(expected_tb):])
def test_custom_exception_no_args(self):
"""Reraising does not require args attribute to contain params"""
class CustomException(Exception):
"""Exception that expects and sets attrs but not args"""
def __init__(self, value):
Exception.__init__(self)
self.value = value
try:
raise CustomException("Some value")
except CustomException:
_exc_info = sys.exc_info()
self.assertRaises(CustomException, reraise, *_exc_info)
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
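The `reraise` helper under test exists because Python 2's three-argument `raise` syntax is invalid in Python 3. On Python 3 the same effect, re-raising an exc_info triple while preserving the original traceback, is a one-liner via `with_traceback`. A sketch (the `reraise_sketch` name is invented; this is the Python 3 branch only, not the 2/3 compat shim itself):

```python
import sys


def reraise_sketch(exc_class, exc_obj, exc_tb):
    """Python 3 equivalent of the compat reraise() helper: re-raise an
    exc_info triple, chaining the original traceback onto the exception."""
    raise exc_obj.with_traceback(exc_tb)


try:
    raise ValueError("Bad value")
except ValueError:
    exc_info = sys.exc_info()

try:
    reraise_sketch(*exc_info)
except ValueError as e:
    assert e is exc_info[1]  # the very same exception object is re-raised
```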


@@ -1,362 +0,0 @@
# Copyright (c) 2008-2012 testtools developers. See LICENSE for details.
import json
import os
import tempfile
import unittest
from testtools import TestCase, skipUnless
from testtools.compat import (
_b,
_u,
BytesIO,
StringIO,
str_is_unicode,
)
from testtools.content import (
attach_file,
Content,
content_from_file,
content_from_stream,
JSON,
json_content,
StackLinesContent,
StacktraceContent,
TracebackContent,
text_content,
)
from testtools.content_type import (
ContentType,
UTF8_TEXT,
)
from testtools.matchers import (
Equals,
MatchesException,
Raises,
raises,
)
from testtools.tests.helpers import an_exc_info
raises_value_error = Raises(MatchesException(ValueError))
class TestContent(TestCase):
def test___init___None_errors(self):
self.assertThat(lambda: Content(None, None), raises_value_error)
self.assertThat(
lambda: Content(None, lambda: ["traceback"]), raises_value_error)
self.assertThat(
lambda: Content(ContentType("text", "traceback"), None),
raises_value_error)
def test___init___sets_ivars(self):
content_type = ContentType("foo", "bar")
content = Content(content_type, lambda: ["bytes"])
self.assertEqual(content_type, content.content_type)
self.assertEqual(["bytes"], list(content.iter_bytes()))
def test___eq__(self):
content_type = ContentType("foo", "bar")
one_chunk = lambda: [_b("bytes")]
two_chunk = lambda: [_b("by"), _b("tes")]
content1 = Content(content_type, one_chunk)
content2 = Content(content_type, one_chunk)
content3 = Content(content_type, two_chunk)
content4 = Content(content_type, lambda: [_b("by"), _b("te")])
content5 = Content(ContentType("f", "b"), two_chunk)
self.assertEqual(content1, content2)
self.assertEqual(content1, content3)
self.assertNotEqual(content1, content4)
self.assertNotEqual(content1, content5)
def test___repr__(self):
content = Content(ContentType("application", "octet-stream"),
lambda: [_b("\x00bin"), _b("ary\xff")])
self.assertIn("\\x00binary\\xff", repr(content))
def test_iter_text_not_text_errors(self):
content_type = ContentType("foo", "bar")
content = Content(content_type, lambda: ["bytes"])
self.assertThat(content.iter_text, raises_value_error)
def test_iter_text_decodes(self):
content_type = ContentType("text", "strange", {"charset": "utf8"})
content = Content(
content_type, lambda: [_u("bytes\xea").encode("utf8")])
self.assertEqual([_u("bytes\xea")], list(content.iter_text()))
def test_iter_text_default_charset_iso_8859_1(self):
content_type = ContentType("text", "strange")
text = _u("bytes\xea")
iso_version = text.encode("ISO-8859-1")
content = Content(content_type, lambda: [iso_version])
self.assertEqual([text], list(content.iter_text()))
def test_as_text(self):
content_type = ContentType("text", "strange", {"charset": "utf8"})
content = Content(
content_type, lambda: [_u("bytes\xea").encode("utf8")])
self.assertEqual(_u("bytes\xea"), content.as_text())
def test_from_file(self):
fd, path = tempfile.mkstemp()
self.addCleanup(os.remove, path)
os.write(fd, _b('some data'))
os.close(fd)
content = content_from_file(path, UTF8_TEXT, chunk_size=2)
self.assertThat(
list(content.iter_bytes()),
Equals([_b('so'), _b('me'), _b(' d'), _b('at'), _b('a')]))
def test_from_nonexistent_file(self):
directory = tempfile.mkdtemp()
nonexistent = os.path.join(directory, 'nonexistent-file')
content = content_from_file(nonexistent)
self.assertThat(content.iter_bytes, raises(IOError))
def test_from_file_default_type(self):
content = content_from_file('/nonexistent/path')
self.assertThat(content.content_type, Equals(UTF8_TEXT))
def test_from_file_eager_loading(self):
fd, path = tempfile.mkstemp()
os.write(fd, _b('some data'))
os.close(fd)
content = content_from_file(path, UTF8_TEXT, buffer_now=True)
os.remove(path)
self.assertThat(
''.join(content.iter_text()), Equals('some data'))
def test_from_file_with_simple_seek(self):
f = tempfile.NamedTemporaryFile()
f.write(_b('some data'))
f.flush()
self.addCleanup(f.close)
content = content_from_file(
f.name, UTF8_TEXT, chunk_size=50, seek_offset=5)
self.assertThat(
list(content.iter_bytes()), Equals([_b('data')]))
def test_from_file_with_whence_seek(self):
f = tempfile.NamedTemporaryFile()
f.write(_b('some data'))
f.flush()
self.addCleanup(f.close)
content = content_from_file(
f.name, UTF8_TEXT, chunk_size=50, seek_offset=-4, seek_whence=2)
self.assertThat(
list(content.iter_bytes()), Equals([_b('data')]))
def test_from_stream(self):
data = StringIO('some data')
content = content_from_stream(data, UTF8_TEXT, chunk_size=2)
self.assertThat(
list(content.iter_bytes()), Equals(['so', 'me', ' d', 'at', 'a']))
def test_from_stream_default_type(self):
data = StringIO('some data')
content = content_from_stream(data)
self.assertThat(content.content_type, Equals(UTF8_TEXT))
def test_from_stream_eager_loading(self):
fd, path = tempfile.mkstemp()
self.addCleanup(os.remove, path)
self.addCleanup(os.close, fd)
os.write(fd, _b('some data'))
stream = open(path, 'rb')
self.addCleanup(stream.close)
content = content_from_stream(stream, UTF8_TEXT, buffer_now=True)
os.write(fd, _b('more data'))
self.assertThat(
''.join(content.iter_text()), Equals('some data'))
def test_from_stream_with_simple_seek(self):
data = BytesIO(_b('some data'))
content = content_from_stream(
data, UTF8_TEXT, chunk_size=50, seek_offset=5)
self.assertThat(
list(content.iter_bytes()), Equals([_b('data')]))
def test_from_stream_with_whence_seek(self):
data = BytesIO(_b('some data'))
content = content_from_stream(
data, UTF8_TEXT, chunk_size=50, seek_offset=-4, seek_whence=2)
self.assertThat(
list(content.iter_bytes()), Equals([_b('data')]))
def test_from_text(self):
data = _u("some data")
expected = Content(UTF8_TEXT, lambda: [data.encode('utf8')])
self.assertEqual(expected, text_content(data))
@skipUnless(str_is_unicode, "Test only applies in python 3.")
def test_text_content_raises_TypeError_when_passed_bytes(self):
data = _b("Some Bytes")
self.assertRaises(TypeError, text_content, data)
def test_text_content_raises_TypeError_when_passed_non_text(self):
bad_values = (None, list(), dict(), 42, 1.23)
for value in bad_values:
self.assertThat(
lambda: text_content(value),
raises(
TypeError("text_content must be given text, not '%s'." %
type(value).__name__)
),
)
def test_json_content(self):
data = {'foo': 'bar'}
expected = Content(JSON, lambda: [_b('{"foo": "bar"}')])
self.assertEqual(expected, json_content(data))
class TestStackLinesContent(TestCase):
def _get_stack_line_and_expected_output(self):
stack_lines = [
('/path/to/file', 42, 'some_function', 'print("Hello World")'),
]
expected = ' File "/path/to/file", line 42, in some_function\n' \
' print("Hello World")\n'
return stack_lines, expected
def test_single_stack_line(self):
stack_lines, expected = self._get_stack_line_and_expected_output()
actual = StackLinesContent(stack_lines).as_text()
self.assertEqual(expected, actual)
def test_prefix_content(self):
stack_lines, expected = self._get_stack_line_and_expected_output()
prefix = self.getUniqueString() + '\n'
content = StackLinesContent(stack_lines, prefix_content=prefix)
actual = content.as_text()
expected = prefix + expected
self.assertEqual(expected, actual)
def test_postfix_content(self):
stack_lines, expected = self._get_stack_line_and_expected_output()
postfix = '\n' + self.getUniqueString()
content = StackLinesContent(stack_lines, postfix_content=postfix)
actual = content.as_text()
expected = expected + postfix
self.assertEqual(expected, actual)
def test___init___sets_content_type(self):
stack_lines, expected = self._get_stack_line_and_expected_output()
content = StackLinesContent(stack_lines)
expected_content_type = ContentType("text", "x-traceback",
{"language": "python", "charset": "utf8"})
self.assertEqual(expected_content_type, content.content_type)
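The expected string in the stack-line tests above follows the standard Python traceback layout. A minimal stdlib-only sketch of that formatting, including the optional prefix/postfix behaviour the tests exercise (`format_stack_lines` is an illustrative name, not the testtools API):

```python
def format_stack_lines(stack_lines, prefix='', postfix=''):
    """Render (filename, lineno, function, source) tuples in traceback style.

    Sketch of the behaviour checked by TestStackLinesContent above; the
    real implementation lives in testtools.content.StackLinesContent.
    """
    lines = [
        '  File "%s", line %d, in %s\n    %s\n' % (filename, lineno, func, src)
        for filename, lineno, func, src in stack_lines
    ]
    # Prefix and postfix content are joined verbatim around the frames.
    return prefix + ''.join(lines) + postfix

text = format_stack_lines(
    [('/path/to/file', 42, 'some_function', 'print("Hello World")')])
```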
class TestTracebackContent(TestCase):
def test___init___None_errors(self):
self.assertThat(
lambda: TracebackContent(None, None), raises_value_error)
def test___init___sets_ivars(self):
content = TracebackContent(an_exc_info, self)
content_type = ContentType("text", "x-traceback",
{"language": "python", "charset": "utf8"})
self.assertEqual(content_type, content.content_type)
result = unittest.TestResult()
expected = result._exc_info_to_string(an_exc_info, self)
self.assertEqual(expected, ''.join(list(content.iter_text())))
class TestStacktraceContent(TestCase):
def test___init___sets_ivars(self):
content = StacktraceContent()
content_type = ContentType("text", "x-traceback",
{"language": "python", "charset": "utf8"})
self.assertEqual(content_type, content.content_type)
def test_prefix_is_used(self):
prefix = self.getUniqueString()
actual = StacktraceContent(prefix_content=prefix).as_text()
self.assertTrue(actual.startswith(prefix))
def test_postfix_is_used(self):
postfix = self.getUniqueString()
actual = StacktraceContent(postfix_content=postfix).as_text()
self.assertTrue(actual.endswith(postfix))
def test_top_frame_is_skipped_when_no_stack_is_specified(self):
actual = StacktraceContent().as_text()
self.assertTrue('testtools/content.py' not in actual)
class TestAttachFile(TestCase):
def make_file(self, data):
# GZ 2011-04-21: This helper could be useful for methods above trying
# to use mkstemp, but should handle write failures and
# always close the fd. There must be a better way.
fd, path = tempfile.mkstemp()
self.addCleanup(os.remove, path)
os.write(fd, _b(data))
os.close(fd)
return path
def test_simple(self):
class SomeTest(TestCase):
def test_foo(self):
pass
test = SomeTest('test_foo')
data = 'some data'
path = self.make_file(data)
my_content = text_content(data)
attach_file(test, path, name='foo')
self.assertEqual({'foo': my_content}, test.getDetails())
def test_optional_name(self):
# If no name is provided, attach_file just uses the base name of the
# file.
class SomeTest(TestCase):
def test_foo(self):
pass
test = SomeTest('test_foo')
path = self.make_file('some data')
base_path = os.path.basename(path)
attach_file(test, path)
self.assertEqual([base_path], list(test.getDetails()))
def test_lazy_read(self):
class SomeTest(TestCase):
def test_foo(self):
pass
test = SomeTest('test_foo')
path = self.make_file('some data')
attach_file(test, path, name='foo', buffer_now=False)
content = test.getDetails()['foo']
content_file = open(path, 'w')
content_file.write('new data')
content_file.close()
self.assertEqual(''.join(content.iter_text()), 'new data')
def test_eager_read_by_default(self):
class SomeTest(TestCase):
def test_foo(self):
pass
test = SomeTest('test_foo')
path = self.make_file('some data')
attach_file(test, path, name='foo')
content = test.getDetails()['foo']
content_file = open(path, 'w')
content_file.write('new data')
content_file.close()
self.assertEqual(''.join(content.iter_text()), 'some data')
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
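The `buffer_now` tests above hinge on *when* file data is read: eagerly at attach time, or lazily when the content is iterated. A stdlib-only sketch of that distinction (this `content_from_file` is a simplified stand-in, not the testtools function, which returns a `Content` object):

```python
import os
import tempfile

def content_from_file(path, buffer_now=False):
    """Return a zero-argument callable yielding the file's byte chunks.

    With buffer_now=True the bytes are read immediately, so later changes
    to the file are not observed; otherwise reading is deferred until the
    callable is invoked.
    """
    if buffer_now:
        with open(path, 'rb') as f:
            data = f.read()
        return lambda: [data]
    def read_later():
        with open(path, 'rb') as f:
            return [f.read()]
    return read_later

fd, path = tempfile.mkstemp()
os.write(fd, b'some data')
os.close(fd)

eager = content_from_file(path, buffer_now=True)  # reads now
lazy = content_from_file(path)                    # reads on demand

with open(path, 'wb') as f:
    f.write(b'new data')  # mutate the file after creating both contents

eager_bytes = b''.join(eager())  # unchanged snapshot
lazy_bytes = b''.join(lazy())    # sees the mutation
os.remove(path)
```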


@@ -1,66 +0,0 @@
# Copyright (c) 2008, 2012 testtools developers. See LICENSE for details.
from testtools import TestCase
from testtools.matchers import Equals, MatchesException, Raises
from testtools.content_type import (
ContentType,
JSON,
UTF8_TEXT,
)
class TestContentType(TestCase):
def test___init___None_errors(self):
raises_value_error = Raises(MatchesException(ValueError))
        self.assertThat(lambda: ContentType(None, None), raises_value_error)
        self.assertThat(lambda: ContentType(None, "traceback"),
            raises_value_error)
        self.assertThat(lambda: ContentType("text", None), raises_value_error)
def test___init___sets_ivars(self):
content_type = ContentType("foo", "bar")
self.assertEqual("foo", content_type.type)
self.assertEqual("bar", content_type.subtype)
self.assertEqual({}, content_type.parameters)
def test___init___with_parameters(self):
content_type = ContentType("foo", "bar", {"quux": "thing"})
self.assertEqual({"quux": "thing"}, content_type.parameters)
def test___eq__(self):
content_type1 = ContentType("foo", "bar", {"quux": "thing"})
content_type2 = ContentType("foo", "bar", {"quux": "thing"})
content_type3 = ContentType("foo", "bar", {"quux": "thing2"})
self.assertTrue(content_type1.__eq__(content_type2))
self.assertFalse(content_type1.__eq__(content_type3))
def test_basic_repr(self):
content_type = ContentType('text', 'plain')
self.assertThat(repr(content_type), Equals('text/plain'))
def test_extended_repr(self):
content_type = ContentType(
'text', 'plain', {'foo': 'bar', 'baz': 'qux'})
self.assertThat(
repr(content_type), Equals('text/plain; baz="qux"; foo="bar"'))
class TestBuiltinContentTypes(TestCase):
def test_plain_text(self):
# The UTF8_TEXT content type represents UTF-8 encoded text/plain.
self.assertThat(UTF8_TEXT.type, Equals('text'))
self.assertThat(UTF8_TEXT.subtype, Equals('plain'))
self.assertThat(UTF8_TEXT.parameters, Equals({'charset': 'utf8'}))
def test_json_content(self):
        # The JSON content type represents implicitly UTF-8 encoded
        # application/json.
self.assertThat(JSON.type, Equals('application'))
self.assertThat(JSON.subtype, Equals('json'))
self.assertThat(JSON.parameters, Equals({}))
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
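The repr tests above expect parameters rendered in sorted order so the output is deterministic. A minimal stdlib-only sketch of a content type with that behaviour (illustrative, not the testtools implementation):

```python
class ContentType:
    """Minimal MIME content type sketch mirroring the tests above."""

    def __init__(self, primary_type, sub_type, parameters=None):
        if primary_type is None or sub_type is None:
            raise ValueError("None not permitted")
        self.type = primary_type
        self.subtype = sub_type
        self.parameters = parameters or {}

    def __eq__(self, other):
        return self.__dict__ == other.__dict__

    def __repr__(self):
        result = '%s/%s' % (self.type, self.subtype)
        # Sorting parameter names keeps repr stable across dict ordering.
        for name in sorted(self.parameters):
            result += '; %s="%s"' % (name, self.parameters[name])
        return result

plain = ContentType('text', 'plain')
extended = ContentType('text', 'plain', {'foo': 'bar', 'baz': 'qux'})
```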


@@ -1,100 +0,0 @@
# Copyright (c) 2010-2011 Testtools authors. See LICENSE for details.
"""Tests for the distutils test command logic."""
from distutils.dist import Distribution
from extras import try_import
from testtools.compat import (
_b,
_u,
BytesIO,
)
fixtures = try_import('fixtures')
import testtools
from testtools import TestCase
from testtools.distutilscmd import TestCommand
from testtools.matchers import MatchesRegex
if fixtures:
class SampleTestFixture(fixtures.Fixture):
"""Creates testtools.runexample temporarily."""
def __init__(self):
self.package = fixtures.PythonPackage(
'runexample', [('__init__.py', _b("""
from testtools import TestCase
class TestFoo(TestCase):
def test_bar(self):
pass
def test_quux(self):
pass
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
"""))])
def setUp(self):
super(SampleTestFixture, self).setUp()
self.useFixture(self.package)
testtools.__path__.append(self.package.base)
self.addCleanup(testtools.__path__.remove, self.package.base)
class TestCommandTest(TestCase):
def setUp(self):
super(TestCommandTest, self).setUp()
if fixtures is None:
self.skipTest("Need fixtures")
def test_test_module(self):
self.useFixture(SampleTestFixture())
stdout = self.useFixture(fixtures.StringStream('stdout'))
dist = Distribution()
dist.script_name = 'setup.py'
dist.script_args = ['test']
dist.cmdclass = {'test': TestCommand}
dist.command_options = {
'test': {'test_module': ('command line', 'testtools.runexample')}}
with fixtures.MonkeyPatch('sys.stdout', stdout.stream):
cmd = dist.reinitialize_command('test')
dist.run_command('test')
self.assertThat(
stdout.getDetails()['stdout'].as_text(),
MatchesRegex(_u("""Tests running...
Ran 2 tests in \\d.\\d\\d\\ds
OK
""")))
def test_test_suite(self):
self.useFixture(SampleTestFixture())
stdout = self.useFixture(fixtures.StringStream('stdout'))
dist = Distribution()
dist.script_name = 'setup.py'
dist.script_args = ['test']
dist.cmdclass = {'test': TestCommand}
dist.command_options = {
'test': {
'test_suite': (
'command line', 'testtools.runexample.test_suite')}}
with fixtures.MonkeyPatch('sys.stdout', stdout.stream):
cmd = dist.reinitialize_command('test')
dist.run_command('test')
self.assertThat(
stdout.getDetails()['stdout'].as_text(),
MatchesRegex(_u("""Tests running...
Ran 2 tests in \\d.\\d\\d\\ds
OK
""")))
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)


@@ -1,181 +0,0 @@
# Copyright (c) 2010-2011 testtools developers. See LICENSE for details.
import unittest
from extras import try_import
from testtools import (
TestCase,
content,
content_type,
)
from testtools.compat import _b, _u
from testtools.matchers import (
Contains,
Equals,
)
from testtools.testresult.doubles import (
ExtendedTestResult,
)
fixtures = try_import('fixtures')
LoggingFixture = try_import('fixtures.tests.helpers.LoggingFixture')
class TestFixtureSupport(TestCase):
def setUp(self):
super(TestFixtureSupport, self).setUp()
if fixtures is None or LoggingFixture is None:
self.skipTest("Need fixtures")
def test_useFixture(self):
fixture = LoggingFixture()
class SimpleTest(TestCase):
def test_foo(self):
self.useFixture(fixture)
result = unittest.TestResult()
SimpleTest('test_foo').run(result)
self.assertTrue(result.wasSuccessful())
self.assertEqual(['setUp', 'cleanUp'], fixture.calls)
def test_useFixture_cleanups_raise_caught(self):
calls = []
def raiser(ignored):
calls.append('called')
raise Exception('foo')
        fixture = fixtures.FunctionFixture(lambda: None, raiser)
class SimpleTest(TestCase):
def test_foo(self):
self.useFixture(fixture)
result = unittest.TestResult()
SimpleTest('test_foo').run(result)
self.assertFalse(result.wasSuccessful())
self.assertEqual(['called'], calls)
def test_useFixture_details_captured(self):
class DetailsFixture(fixtures.Fixture):
def setUp(self):
fixtures.Fixture.setUp(self)
self.addCleanup(delattr, self, 'content')
self.content = [_b('content available until cleanUp')]
self.addDetail('content',
content.Content(content_type.UTF8_TEXT, self.get_content))
def get_content(self):
return self.content
fixture = DetailsFixture()
class SimpleTest(TestCase):
def test_foo(self):
self.useFixture(fixture)
# Add a colliding detail (both should show up)
                self.addDetail('content',
                    content.Content(content_type.UTF8_TEXT, lambda: [_b('foo')]))
result = ExtendedTestResult()
SimpleTest('test_foo').run(result)
self.assertEqual('addSuccess', result._events[-2][0])
details = result._events[-2][2]
self.assertEqual(['content', 'content-1'], sorted(details.keys()))
self.assertEqual('foo', details['content'].as_text())
self.assertEqual('content available until cleanUp',
details['content-1'].as_text())
def test_useFixture_multiple_details_captured(self):
class DetailsFixture(fixtures.Fixture):
def setUp(self):
fixtures.Fixture.setUp(self)
self.addDetail('aaa', content.text_content("foo"))
self.addDetail('bbb', content.text_content("bar"))
fixture = DetailsFixture()
class SimpleTest(TestCase):
def test_foo(self):
self.useFixture(fixture)
result = ExtendedTestResult()
SimpleTest('test_foo').run(result)
self.assertEqual('addSuccess', result._events[-2][0])
details = result._events[-2][2]
self.assertEqual(['aaa', 'bbb'], sorted(details))
self.assertEqual(_u('foo'), details['aaa'].as_text())
self.assertEqual(_u('bar'), details['bbb'].as_text())
def test_useFixture_details_captured_from_setUp(self):
# Details added during fixture set-up are gathered even if setUp()
# fails with an exception.
class BrokenFixture(fixtures.Fixture):
def setUp(self):
fixtures.Fixture.setUp(self)
self.addDetail('content', content.text_content("foobar"))
raise Exception()
fixture = BrokenFixture()
class SimpleTest(TestCase):
def test_foo(self):
self.useFixture(fixture)
result = ExtendedTestResult()
SimpleTest('test_foo').run(result)
self.assertEqual('addError', result._events[-2][0])
details = result._events[-2][2]
self.assertEqual(['content', 'traceback'], sorted(details))
self.assertEqual('foobar', ''.join(details['content'].iter_text()))
def test_useFixture_details_captured_from__setUp(self):
# Newer Fixtures deprecates setUp() in favour of _setUp().
# https://bugs.launchpad.net/testtools/+bug/1469759 reports that
# this is broken when gathering details from a broken _setUp().
class BrokenFixture(fixtures.Fixture):
def _setUp(self):
fixtures.Fixture._setUp(self)
self.addDetail('broken', content.text_content("foobar"))
raise Exception("_setUp broke")
fixture = BrokenFixture()
class SimpleTest(TestCase):
def test_foo(self):
self.addDetail('foo_content', content.text_content("foo ok"))
self.useFixture(fixture)
result = ExtendedTestResult()
SimpleTest('test_foo').run(result)
self.assertEqual('addError', result._events[-2][0])
details = result._events[-2][2]
self.assertEqual(
['broken', 'foo_content', 'traceback', 'traceback-1'],
sorted(details))
self.expectThat(
''.join(details['broken'].iter_text()),
Equals('foobar'))
self.expectThat(
''.join(details['foo_content'].iter_text()),
Equals('foo ok'))
self.expectThat(
''.join(details['traceback'].iter_text()),
Contains('_setUp broke'))
self.expectThat(
''.join(details['traceback-1'].iter_text()),
Contains('foobar'))
def test_useFixture_original_exception_raised_if_gather_details_fails(self):
# In bug #1368440 it was reported that when a fixture fails setUp
# and gather_details errors on it, then the original exception that
# failed is not reported.
class BrokenFixture(fixtures.Fixture):
def getDetails(self):
raise AttributeError("getDetails broke")
def setUp(self):
fixtures.Fixture.setUp(self)
raise Exception("setUp broke")
fixture = BrokenFixture()
class SimpleTest(TestCase):
def test_foo(self):
self.useFixture(fixture)
result = ExtendedTestResult()
SimpleTest('test_foo').run(result)
self.assertEqual('addError', result._events[-2][0])
details = result._events[-2][2]
self.assertEqual(['traceback', 'traceback-1'], sorted(details))
self.assertThat(
''.join(details['traceback'].iter_text()),
Contains('setUp broke'))
self.assertThat(
''.join(details['traceback-1'].iter_text()),
Contains('getDetails broke'))
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
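Several of the fixture tests above rely on colliding detail names being disambiguated ('content' plus 'content-1', 'traceback' plus 'traceback-1'). A stdlib-only sketch of that renaming scheme (`gather_details` here is an illustrative stand-in for the testtools helper of the same idea):

```python
def gather_details(source_details, target_details):
    """Merge detail entries into target_details, renaming on collision.

    A name that already exists gets a numeric suffix: 'content' becomes
    'content-1', then 'content-2', and so on. Sketch of the behaviour
    the fixture tests above depend on, not the testtools implementation.
    """
    for name, detail in source_details.items():
        new_name = name
        disambiguator = 0
        while new_name in target_details:
            disambiguator += 1
            new_name = '%s-%d' % (name, disambiguator)
        target_details[new_name] = detail

test_details = {'content': 'foo'}
gather_details(
    {'content': 'content available until cleanUp'}, test_details)
```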


@@ -1,30 +0,0 @@
# Copyright (c) 2010-2012 testtools developers. See LICENSE for details.
from testtools import TestCase
from testtools.tests.helpers import (
FullStackRunTest,
hide_testtools_stack,
is_stack_hidden,
)
class TestStackHiding(TestCase):
run_tests_with = FullStackRunTest
def setUp(self):
super(TestStackHiding, self).setUp()
self.addCleanup(hide_testtools_stack, is_stack_hidden())
def test_is_stack_hidden_consistent_true(self):
hide_testtools_stack(True)
self.assertEqual(True, is_stack_hidden())
def test_is_stack_hidden_consistent_false(self):
hide_testtools_stack(False)
self.assertEqual(False, is_stack_hidden())
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)


@@ -1,167 +0,0 @@
# Copyright (c) 2010 Twisted Matrix Laboratories.
# See LICENSE for details.
"""Tests for testtools.monkey."""
from testtools import TestCase
from testtools.matchers import MatchesException, Raises
from testtools.monkey import MonkeyPatcher, patch
class TestObj:
def __init__(self):
self.foo = 'foo value'
self.bar = 'bar value'
self.baz = 'baz value'
class MonkeyPatcherTest(TestCase):
"""
Tests for 'MonkeyPatcher' monkey-patching class.
"""
def setUp(self):
super(MonkeyPatcherTest, self).setUp()
self.test_object = TestObj()
self.original_object = TestObj()
self.monkey_patcher = MonkeyPatcher()
def test_empty(self):
# A monkey patcher without patches doesn't change a thing.
self.monkey_patcher.patch()
# We can't assert that all state is unchanged, but at least we can
# check our test object.
self.assertEquals(self.original_object.foo, self.test_object.foo)
self.assertEquals(self.original_object.bar, self.test_object.bar)
self.assertEquals(self.original_object.baz, self.test_object.baz)
def test_construct_with_patches(self):
# Constructing a 'MonkeyPatcher' with patches adds all of the given
# patches to the patch list.
patcher = MonkeyPatcher((self.test_object, 'foo', 'haha'),
(self.test_object, 'bar', 'hehe'))
patcher.patch()
self.assertEquals('haha', self.test_object.foo)
self.assertEquals('hehe', self.test_object.bar)
self.assertEquals(self.original_object.baz, self.test_object.baz)
def test_patch_existing(self):
# Patching an attribute that exists sets it to the value defined in the
# patch.
self.monkey_patcher.add_patch(self.test_object, 'foo', 'haha')
self.monkey_patcher.patch()
self.assertEquals(self.test_object.foo, 'haha')
def test_patch_non_existing(self):
# Patching a non-existing attribute sets it to the value defined in
# the patch.
self.monkey_patcher.add_patch(self.test_object, 'doesntexist', 'value')
self.monkey_patcher.patch()
self.assertEquals(self.test_object.doesntexist, 'value')
def test_restore_non_existing(self):
# Restoring a value that didn't exist before the patch deletes the
# value.
self.monkey_patcher.add_patch(self.test_object, 'doesntexist', 'value')
self.monkey_patcher.patch()
self.monkey_patcher.restore()
marker = object()
self.assertIs(marker, getattr(self.test_object, 'doesntexist', marker))
def test_patch_already_patched(self):
# Adding a patch for an object and attribute that already have a patch
# overrides the existing patch.
self.monkey_patcher.add_patch(self.test_object, 'foo', 'blah')
self.monkey_patcher.add_patch(self.test_object, 'foo', 'BLAH')
self.monkey_patcher.patch()
self.assertEquals(self.test_object.foo, 'BLAH')
self.monkey_patcher.restore()
self.assertEquals(self.test_object.foo, self.original_object.foo)
def test_restore_twice_is_a_no_op(self):
# Restoring an already-restored monkey patch is a no-op.
self.monkey_patcher.add_patch(self.test_object, 'foo', 'blah')
self.monkey_patcher.patch()
self.monkey_patcher.restore()
self.assertEquals(self.test_object.foo, self.original_object.foo)
self.monkey_patcher.restore()
self.assertEquals(self.test_object.foo, self.original_object.foo)
def test_run_with_patches_decoration(self):
# run_with_patches runs the given callable, passing in all arguments
# and keyword arguments, and returns the return value of the callable.
log = []
def f(a, b, c=None):
log.append((a, b, c))
return 'foo'
result = self.monkey_patcher.run_with_patches(f, 1, 2, c=10)
self.assertEquals('foo', result)
self.assertEquals([(1, 2, 10)], log)
def test_repeated_run_with_patches(self):
# We can call the same function with run_with_patches more than
# once. All patches apply for each call.
def f():
return (self.test_object.foo, self.test_object.bar,
self.test_object.baz)
self.monkey_patcher.add_patch(self.test_object, 'foo', 'haha')
result = self.monkey_patcher.run_with_patches(f)
self.assertEquals(
('haha', self.original_object.bar, self.original_object.baz),
result)
result = self.monkey_patcher.run_with_patches(f)
self.assertEquals(
('haha', self.original_object.bar, self.original_object.baz),
result)
def test_run_with_patches_restores(self):
# run_with_patches restores the original values after the function has
# executed.
self.monkey_patcher.add_patch(self.test_object, 'foo', 'haha')
self.assertEquals(self.original_object.foo, self.test_object.foo)
self.monkey_patcher.run_with_patches(lambda: None)
self.assertEquals(self.original_object.foo, self.test_object.foo)
def test_run_with_patches_restores_on_exception(self):
# run_with_patches restores the original values even when the function
# raises an exception.
def _():
self.assertEquals(self.test_object.foo, 'haha')
self.assertEquals(self.test_object.bar, 'blahblah')
raise RuntimeError("Something went wrong!")
self.monkey_patcher.add_patch(self.test_object, 'foo', 'haha')
self.monkey_patcher.add_patch(self.test_object, 'bar', 'blahblah')
        self.assertThat(lambda: self.monkey_patcher.run_with_patches(_),
            Raises(MatchesException(RuntimeError("Something went wrong!"))))
self.assertEquals(self.test_object.foo, self.original_object.foo)
self.assertEquals(self.test_object.bar, self.original_object.bar)
class TestPatchHelper(TestCase):
def test_patch_patches(self):
# patch(obj, name, value) sets obj.name to value.
test_object = TestObj()
patch(test_object, 'foo', 42)
self.assertEqual(42, test_object.foo)
def test_patch_returns_cleanup(self):
# patch(obj, name, value) returns a nullary callable that restores obj
# to its original state when run.
test_object = TestObj()
original = test_object.foo
cleanup = patch(test_object, 'foo', 42)
cleanup()
self.assertEqual(original, test_object.foo)
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
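The tests above pin down the patch/restore protocol: originals are saved before patching, restore deletes attributes that did not exist beforehand, and `run_with_patches` restores even on exception. A stdlib-only sketch of that protocol (illustrative, not the testtools implementation):

```python
_MISSING = object()  # sentinel: attribute absent before patching

class MonkeyPatcher:
    """Minimal sketch of the patch/restore behaviour exercised above."""

    def __init__(self, *patches):
        self._patches = list(patches)  # (obj, name, value) triples
        self._saved = []

    def add_patch(self, obj, name, value):
        self._patches.append((obj, name, value))

    def patch(self):
        for obj, name, value in self._patches:
            # Save the original (or the sentinel) so restore() can undo.
            self._saved.append((obj, name, getattr(obj, name, _MISSING)))
            setattr(obj, name, value)

    def restore(self):
        # Pop in reverse order; a second restore() is a no-op.
        while self._saved:
            obj, name, original = self._saved.pop()
            if original is _MISSING:
                delattr(obj, name)
            else:
                setattr(obj, name, original)

    def run_with_patches(self, f, *args, **kwargs):
        self.patch()
        try:
            return f(*args, **kwargs)
        finally:
            self.restore()

class Obj:
    foo = 'foo value'

obj = Obj()
patcher = MonkeyPatcher()
patcher.add_patch(obj, 'foo', 'haha')
patcher.add_patch(obj, 'doesntexist', 'value')
result = patcher.run_with_patches(lambda: (obj.foo, obj.doesntexist))
```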


@@ -1,359 +0,0 @@
# Copyright (c) 2010 testtools developers. See LICENSE for details.
"""Tests for the test runner logic."""
import doctest
from unittest import TestSuite
import sys
from textwrap import dedent
from extras import try_import
fixtures = try_import('fixtures')
testresources = try_import('testresources')
import unittest2
import testtools
from testtools import TestCase, run, skipUnless
from testtools.compat import (
_b,
_u,
StringIO,
)
from testtools.matchers import (
Contains,
DocTestMatches,
MatchesRegex,
)
if fixtures:
class SampleTestFixture(fixtures.Fixture):
"""Creates testtools.runexample temporarily."""
def __init__(self, broken=False):
"""Create a SampleTestFixture.
:param broken: If True, the sample file will not be importable.
"""
if not broken:
init_contents = _b("""\
from testtools import TestCase
class TestFoo(TestCase):
def test_bar(self):
pass
def test_quux(self):
pass
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
""")
else:
init_contents = b"class not in\n"
self.package = fixtures.PythonPackage(
'runexample', [('__init__.py', init_contents)])
def setUp(self):
super(SampleTestFixture, self).setUp()
self.useFixture(self.package)
testtools.__path__.append(self.package.base)
self.addCleanup(testtools.__path__.remove, self.package.base)
self.addCleanup(sys.modules.pop, 'testtools.runexample', None)
if fixtures and testresources:
class SampleResourcedFixture(fixtures.Fixture):
"""Creates a test suite that uses testresources."""
def __init__(self):
super(SampleResourcedFixture, self).__init__()
self.package = fixtures.PythonPackage(
'resourceexample', [('__init__.py', _b("""
from fixtures import Fixture
from testresources import (
FixtureResource,
OptimisingTestSuite,
ResourcedTestCase,
)
from testtools import TestCase
class Printer(Fixture):
def setUp(self):
super(Printer, self).setUp()
print('Setting up Printer')
def reset(self):
pass
class TestFoo(TestCase, ResourcedTestCase):
    # When run, this prints 'Setting up Printer' just once; if the
    # OptimisingTestSuite is not honoured, it prints once per test case.
resources=[('res', FixtureResource(Printer()))]
def test_bar(self):
pass
def test_foo(self):
pass
def test_quux(self):
pass
def test_suite():
from unittest import TestLoader
return OptimisingTestSuite(TestLoader().loadTestsFromName(__name__))
"""))])
def setUp(self):
super(SampleResourcedFixture, self).setUp()
self.useFixture(self.package)
self.addCleanup(testtools.__path__.remove, self.package.base)
testtools.__path__.append(self.package.base)
if fixtures:
class SampleLoadTestsPackage(fixtures.Fixture):
"""Creates a test suite package using load_tests."""
def __init__(self):
super(SampleLoadTestsPackage, self).__init__()
self.package = fixtures.PythonPackage(
'discoverexample', [('__init__.py', _b("""
from testtools import TestCase, clone_test_with_new_id
class TestExample(TestCase):
def test_foo(self):
pass
def load_tests(loader, tests, pattern):
tests.addTest(clone_test_with_new_id(tests._tests[1]._tests[0], "fred"))
return tests
"""))])
def setUp(self):
super(SampleLoadTestsPackage, self).setUp()
self.useFixture(self.package)
self.addCleanup(sys.path.remove, self.package.base)
class TestRun(TestCase):
def setUp(self):
super(TestRun, self).setUp()
if fixtures is None:
self.skipTest("Need fixtures")
def test_run_custom_list(self):
self.useFixture(SampleTestFixture())
tests = []
class CaptureList(run.TestToolsTestRunner):
def list(self, test):
tests.append(set([case.id() for case
in testtools.testsuite.iterate_tests(test)]))
out = StringIO()
try:
program = run.TestProgram(
argv=['prog', '-l', 'testtools.runexample.test_suite'],
stdout=out, testRunner=CaptureList)
except SystemExit:
exc_info = sys.exc_info()
raise AssertionError("-l tried to exit. %r" % exc_info[1])
self.assertEqual([set(['testtools.runexample.TestFoo.test_bar',
'testtools.runexample.TestFoo.test_quux'])], tests)
def test_run_list_with_loader(self):
# list() is attempted with a loader first.
self.useFixture(SampleTestFixture())
tests = []
class CaptureList(run.TestToolsTestRunner):
def list(self, test, loader=None):
tests.append(set([case.id() for case
in testtools.testsuite.iterate_tests(test)]))
tests.append(loader)
out = StringIO()
try:
program = run.TestProgram(
argv=['prog', '-l', 'testtools.runexample.test_suite'],
stdout=out, testRunner=CaptureList)
except SystemExit:
exc_info = sys.exc_info()
raise AssertionError("-l tried to exit. %r" % exc_info[1])
self.assertEqual([set(['testtools.runexample.TestFoo.test_bar',
'testtools.runexample.TestFoo.test_quux']), program.testLoader],
tests)
def test_run_list(self):
self.useFixture(SampleTestFixture())
out = StringIO()
try:
run.main(['prog', '-l', 'testtools.runexample.test_suite'], out)
except SystemExit:
exc_info = sys.exc_info()
raise AssertionError("-l tried to exit. %r" % exc_info[1])
self.assertEqual("""testtools.runexample.TestFoo.test_bar
testtools.runexample.TestFoo.test_quux
""", out.getvalue())
def test_run_list_failed_import(self):
broken = self.useFixture(SampleTestFixture(broken=True))
out = StringIO()
# XXX: http://bugs.python.org/issue22811
unittest2.defaultTestLoader._top_level_dir = None
exc = self.assertRaises(
SystemExit,
run.main, ['prog', 'discover', '-l', broken.package.base, '*.py'], out)
self.assertEqual(2, exc.args[0])
self.assertThat(out.getvalue(), DocTestMatches("""\
unittest2.loader._FailedTest.runexample
Failed to import test module: runexample
Traceback (most recent call last):
File ".../loader.py", line ..., in _find_test_path
package = self._get_module_from_name(name)
File ".../loader.py", line ..., in _get_module_from_name
__import__(name)
File ".../runexample/__init__.py", line 1
class not in
...^...
SyntaxError: invalid syntax
""", doctest.ELLIPSIS))
def test_run_orders_tests(self):
self.useFixture(SampleTestFixture())
out = StringIO()
# We load two tests - one that exists and one that doesn't, and we
# should get the one that exists and neither the one that doesn't nor
# the unmentioned one that does.
tempdir = self.useFixture(fixtures.TempDir())
tempname = tempdir.path + '/tests.list'
f = open(tempname, 'wb')
try:
f.write(_b("""
testtools.runexample.TestFoo.test_bar
testtools.runexample.missingtest
"""))
finally:
f.close()
try:
run.main(['prog', '-l', '--load-list', tempname,
'testtools.runexample.test_suite'], out)
except SystemExit:
exc_info = sys.exc_info()
raise AssertionError(
"-l --load-list tried to exit. %r" % exc_info[1])
self.assertEqual("""testtools.runexample.TestFoo.test_bar
""", out.getvalue())
def test_run_load_list(self):
self.useFixture(SampleTestFixture())
out = StringIO()
# We load two tests - one that exists and one that doesn't, and we
# should get the one that exists and neither the one that doesn't nor
# the unmentioned one that does.
tempdir = self.useFixture(fixtures.TempDir())
tempname = tempdir.path + '/tests.list'
f = open(tempname, 'wb')
try:
f.write(_b("""
testtools.runexample.TestFoo.test_bar
testtools.runexample.missingtest
"""))
finally:
f.close()
try:
run.main(['prog', '-l', '--load-list', tempname,
'testtools.runexample.test_suite'], out)
except SystemExit:
exc_info = sys.exc_info()
raise AssertionError(
"-l --load-list tried to exit. %r" % exc_info[1])
self.assertEqual("""testtools.runexample.TestFoo.test_bar
""", out.getvalue())
def test_load_list_preserves_custom_suites(self):
if testresources is None:
self.skipTest("Need testresources")
self.useFixture(SampleResourcedFixture())
# We load two tests, not loading one. Both share a resource, so we
# should see just one resource setup occur.
tempdir = self.useFixture(fixtures.TempDir())
tempname = tempdir.path + '/tests.list'
f = open(tempname, 'wb')
try:
f.write(_b("""
testtools.resourceexample.TestFoo.test_bar
testtools.resourceexample.TestFoo.test_foo
"""))
finally:
f.close()
stdout = self.useFixture(fixtures.StringStream('stdout'))
with fixtures.MonkeyPatch('sys.stdout', stdout.stream):
try:
run.main(['prog', '--load-list', tempname,
'testtools.resourceexample.test_suite'], stdout.stream)
except SystemExit:
# Evil resides in TestProgram.
pass
out = stdout.getDetails()['stdout'].as_text()
self.assertEqual(1, out.count('Setting up Printer'), "%r" % out)
def test_run_failfast(self):
stdout = self.useFixture(fixtures.StringStream('stdout'))
class Failing(TestCase):
def test_a(self):
self.fail('a')
def test_b(self):
self.fail('b')
with fixtures.MonkeyPatch('sys.stdout', stdout.stream):
runner = run.TestToolsTestRunner(failfast=True)
runner.run(TestSuite([Failing('test_a'), Failing('test_b')]))
self.assertThat(
stdout.getDetails()['stdout'].as_text(), Contains('Ran 1 test'))
def test_run_locals(self):
stdout = self.useFixture(fixtures.StringStream('stdout'))
class Failing(TestCase):
def test_a(self):
a = 1
self.fail('a')
runner = run.TestToolsTestRunner(tb_locals=True, stdout=stdout.stream)
runner.run(Failing('test_a'))
self.assertThat(
stdout.getDetails()['stdout'].as_text(), Contains('a = 1'))
def test_stdout_honoured(self):
self.useFixture(SampleTestFixture())
tests = []
out = StringIO()
exc = self.assertRaises(SystemExit, run.main,
argv=['prog', 'testtools.runexample.test_suite'],
stdout=out)
self.assertEqual((0,), exc.args)
self.assertThat(
out.getvalue(),
MatchesRegex(_u("""Tests running...
Ran 2 tests in \\d.\\d\\d\\ds
OK
""")))
@skipUnless(fixtures, "fixtures not present")
def test_issue_16662(self):
# unittest's discover implementation didn't handle load_tests on
# packages. That is fixed pending commit, but we want to offer it
# to all testtools users regardless of Python version.
# See http://bugs.python.org/issue16662
pkg = self.useFixture(SampleLoadTestsPackage())
out = StringIO()
# XXX: http://bugs.python.org/issue22811
unittest2.defaultTestLoader._top_level_dir = None
self.assertEqual(None, run.main(
['prog', 'discover', '-l', pkg.package.base], out))
self.assertEqual(dedent("""\
discoverexample.TestExample.test_foo
fred
"""), out.getvalue())
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
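The `--load-list` behaviour pinned down by the tests above (keep only the listed test ids, silently ignore listed ids that match nothing, drop everything unmentioned) can be sketched as a plain suite filter. This is an illustrative stand-in using only the stdlib; `filter_by_ids` is a hypothetical helper, not part of testtools:

```python
import unittest

def filter_by_ids(suite, wanted_ids):
    """Return a flat TestSuite keeping only tests whose id() is listed.

    Ids that match no test are silently ignored, and tests not mentioned
    in the list are dropped - mirroring what the tests above assert
    for --load-list.
    """
    kept = []
    def visit(item):
        if isinstance(item, unittest.TestSuite):
            for sub in item:
                visit(sub)
        elif item.id() in wanted_ids:
            kept.append(item)
    visit(suite)
    return unittest.TestSuite(kept)

class Example(unittest.TestCase):
    def test_bar(self):
        pass
    def test_unmentioned(self):
        pass

suite = unittest.TestSuite(
    [Example('test_bar'), Example('test_unmentioned')])
# One id that exists, one that doesn't - as in the test list files above.
wanted = {Example('test_bar').id(), 'no.such.module.missingtest'}
filtered = filter_by_ids(suite, wanted)
print([t.id() for t in filtered])
```

Only the existing, listed test survives the filter; the missing id is ignored rather than reported.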


@@ -1,339 +0,0 @@
# Copyright (c) 2009-2011 testtools developers. See LICENSE for details.
"""Tests for the RunTest single test execution logic."""
from testtools import (
ExtendedToOriginalDecorator,
run_test_with,
RunTest,
TestCase,
TestResult,
)
from testtools.matchers import HasLength, MatchesException, Is, Raises
from testtools.testresult.doubles import ExtendedTestResult
from testtools.tests.helpers import FullStackRunTest
class TestRunTest(TestCase):
run_tests_with = FullStackRunTest
def make_case(self):
class Case(TestCase):
def test(self):
pass
return Case('test')
def test___init___short(self):
run = RunTest("bar")
self.assertEqual("bar", run.case)
self.assertEqual([], run.handlers)
def test__init____handlers(self):
handlers = [("quux", "baz")]
run = RunTest("bar", handlers)
self.assertEqual(handlers, run.handlers)
def test__init____handlers_last_resort(self):
handlers = [("quux", "baz")]
last_resort = "foo"
run = RunTest("bar", handlers, last_resort)
self.assertEqual(last_resort, run.last_resort)
def test_run_with_result(self):
# test.run passes result down to _run_test_method.
log = []
class Case(TestCase):
def _run_test_method(self, result):
log.append(result)
case = Case('_run_test_method')
run = RunTest(case, lambda x: log.append(x))
result = TestResult()
run.run(result)
self.assertEqual(1, len(log))
self.assertEqual(result, log[0].decorated)
def test_run_no_result_manages_new_result(self):
log = []
run = RunTest(self.make_case(), lambda x: log.append(x) or x)
result = run.run()
self.assertIsInstance(result.decorated, TestResult)
def test__run_core_called(self):
case = self.make_case()
log = []
run = RunTest(case, lambda x: x)
run._run_core = lambda: log.append('foo')
run.run()
self.assertEqual(['foo'], log)
def test__run_prepared_result_does_not_mask_keyboard(self):
tearDownRuns = []
class Case(TestCase):
def test(self):
raise KeyboardInterrupt("go")
def _run_teardown(self, result):
tearDownRuns.append(self)
return super(Case, self)._run_teardown(result)
case = Case('test')
run = RunTest(case)
run.result = ExtendedTestResult()
self.assertThat(lambda: run._run_prepared_result(run.result),
Raises(MatchesException(KeyboardInterrupt)))
self.assertEqual(
[('startTest', case), ('stopTest', case)], run.result._events)
# tearDown is still run though!
self.assertThat(tearDownRuns, HasLength(1))
def test__run_user_calls_onException(self):
case = self.make_case()
log = []
def handler(exc_info):
log.append("got it")
self.assertEqual(3, len(exc_info))
self.assertIsInstance(exc_info[1], KeyError)
self.assertIs(KeyError, exc_info[0])
case.addOnException(handler)
e = KeyError('Yo')
def raises():
raise e
run = RunTest(case, [(KeyError, None)])
run.result = ExtendedTestResult()
status = run._run_user(raises)
self.assertEqual(run.exception_caught, status)
self.assertEqual([], run.result._events)
self.assertEqual(["got it"], log)
def test__run_user_can_catch_Exception(self):
case = self.make_case()
e = Exception('Yo')
def raises():
raise e
log = []
run = RunTest(case, [(Exception, None)])
run.result = ExtendedTestResult()
status = run._run_user(raises)
self.assertEqual(run.exception_caught, status)
self.assertEqual([], run.result._events)
self.assertEqual([], log)
def test__run_prepared_result_uncaught_Exception_raised(self):
e = KeyError('Yo')
class Case(TestCase):
def test(self):
raise e
case = Case('test')
log = []
def log_exc(self, result, err):
log.append((result, err))
run = RunTest(case, [(ValueError, log_exc)])
run.result = ExtendedTestResult()
self.assertThat(lambda: run._run_prepared_result(run.result),
Raises(MatchesException(KeyError)))
self.assertEqual(
[('startTest', case), ('stopTest', case)], run.result._events)
self.assertEqual([], log)
def test__run_prepared_result_uncaught_Exception_triggers_error(self):
# https://bugs.launchpad.net/testtools/+bug/1364188
# When something isn't handled, the test that was
# executing has errored, one way or another.
e = SystemExit(0)
class Case(TestCase):
def test(self):
raise e
case = Case('test')
log = []
def log_exc(self, result, err):
log.append((result, err))
run = RunTest(case, [], log_exc)
run.result = ExtendedTestResult()
self.assertThat(lambda: run._run_prepared_result(run.result),
Raises(MatchesException(SystemExit)))
self.assertEqual(
[('startTest', case), ('stopTest', case)], run.result._events)
self.assertEqual([(run.result, e)], log)
def test__run_user_uncaught_Exception_from_exception_handler_raised(self):
case = self.make_case()
def broken_handler(exc_info):
            # ValueError because that's what we know how to catch - and
            # must not.
raise ValueError('boo')
case.addOnException(broken_handler)
e = KeyError('Yo')
def raises():
raise e
log = []
def log_exc(self, result, err):
log.append((result, err))
run = RunTest(case, [(ValueError, log_exc)])
run.result = ExtendedTestResult()
self.assertThat(lambda: run._run_user(raises),
Raises(MatchesException(ValueError)))
self.assertEqual([], run.result._events)
self.assertEqual([], log)
def test__run_user_returns_result(self):
case = self.make_case()
def returns():
return 1
run = RunTest(case)
run.result = ExtendedTestResult()
self.assertEqual(1, run._run_user(returns))
self.assertEqual([], run.result._events)
def test__run_one_decorates_result(self):
log = []
class Run(RunTest):
def _run_prepared_result(self, result):
log.append(result)
return result
run = Run(self.make_case(), lambda x: x)
result = run._run_one('foo')
self.assertEqual([result], log)
self.assertIsInstance(log[0], ExtendedToOriginalDecorator)
self.assertEqual('foo', result.decorated)
def test__run_prepared_result_calls_start_and_stop_test(self):
result = ExtendedTestResult()
case = self.make_case()
run = RunTest(case, lambda x: x)
run.run(result)
self.assertEqual([
('startTest', case),
('addSuccess', case),
('stopTest', case),
], result._events)
def test__run_prepared_result_calls_stop_test_always(self):
result = ExtendedTestResult()
case = self.make_case()
def inner():
raise Exception("foo")
run = RunTest(case, lambda x: x)
run._run_core = inner
self.assertThat(lambda: run.run(result),
Raises(MatchesException(Exception("foo"))))
self.assertEqual([
('startTest', case),
('stopTest', case),
], result._events)
class CustomRunTest(RunTest):
marker = object()
def run(self, result=None):
return self.marker
class TestTestCaseSupportForRunTest(TestCase):
def test_pass_custom_run_test(self):
class SomeCase(TestCase):
def test_foo(self):
pass
result = TestResult()
case = SomeCase('test_foo', runTest=CustomRunTest)
from_run_test = case.run(result)
self.assertThat(from_run_test, Is(CustomRunTest.marker))
def test_default_is_runTest_class_variable(self):
class SomeCase(TestCase):
run_tests_with = CustomRunTest
def test_foo(self):
pass
result = TestResult()
case = SomeCase('test_foo')
from_run_test = case.run(result)
self.assertThat(from_run_test, Is(CustomRunTest.marker))
def test_constructor_argument_overrides_class_variable(self):
# If a 'runTest' argument is passed to the test's constructor, that
# overrides the class variable.
marker = object()
class DifferentRunTest(RunTest):
def run(self, result=None):
return marker
class SomeCase(TestCase):
run_tests_with = CustomRunTest
def test_foo(self):
pass
result = TestResult()
case = SomeCase('test_foo', runTest=DifferentRunTest)
from_run_test = case.run(result)
self.assertThat(from_run_test, Is(marker))
def test_decorator_for_run_test(self):
# Individual test methods can be marked as needing a special runner.
class SomeCase(TestCase):
@run_test_with(CustomRunTest)
def test_foo(self):
pass
result = TestResult()
case = SomeCase('test_foo')
from_run_test = case.run(result)
self.assertThat(from_run_test, Is(CustomRunTest.marker))
def test_extended_decorator_for_run_test(self):
# Individual test methods can be marked as needing a special runner.
# Extra arguments can be passed to the decorator which will then be
# passed on to the RunTest object.
marker = object()
class FooRunTest(RunTest):
def __init__(self, case, handlers=None, bar=None):
super(FooRunTest, self).__init__(case, handlers)
self.bar = bar
def run(self, result=None):
return self.bar
class SomeCase(TestCase):
@run_test_with(FooRunTest, bar=marker)
def test_foo(self):
pass
result = TestResult()
case = SomeCase('test_foo')
from_run_test = case.run(result)
self.assertThat(from_run_test, Is(marker))
def test_works_as_inner_decorator(self):
# Even if run_test_with is the innermost decorator, it will be
# respected.
def wrapped(function):
"""Silly, trivial decorator."""
def decorated(*args, **kwargs):
return function(*args, **kwargs)
decorated.__name__ = function.__name__
decorated.__dict__.update(function.__dict__)
return decorated
class SomeCase(TestCase):
@wrapped
@run_test_with(CustomRunTest)
def test_foo(self):
pass
result = TestResult()
case = SomeCase('test_foo')
from_run_test = case.run(result)
self.assertThat(from_run_test, Is(CustomRunTest.marker))
def test_constructor_overrides_decorator(self):
# If a 'runTest' argument is passed to the test's constructor, that
# overrides the decorator.
marker = object()
class DifferentRunTest(RunTest):
def run(self, result=None):
return marker
class SomeCase(TestCase):
@run_test_with(CustomRunTest)
def test_foo(self):
pass
result = TestResult()
case = SomeCase('test_foo', runTest=DifferentRunTest)
from_run_test = case.run(result)
self.assertThat(from_run_test, Is(marker))
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
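The decorator semantics tested above reduce to stashing a RunTest factory (plus extra keyword arguments) on the test function and letting the case pick it up at run time. A minimal stand-alone sketch under that assumption; the names here are illustrative, not testtools' internals:

```python
def run_test_with(factory, **kwargs):
    """Mark a test method so the case later runs it with ``factory``."""
    def decorator(function):
        function._run_test_with = (factory, kwargs)
        return function
    return decorator

class FakeRunTest:
    # Stand-in for RunTest: just returns the extra argument it was given.
    def __init__(self, case, bar=None):
        self.case = case
        self.bar = bar
    def run(self):
        return self.bar

class Case:
    @run_test_with(FakeRunTest, bar='marker')
    def test_foo(self):
        pass
    def run(self):
        # Look up the factory stashed by the decorator, with a default.
        factory, kwargs = getattr(
            type(self).test_foo, '_run_test_with', (FakeRunTest, {}))
        return factory(self, **kwargs).run()

result = Case().run()
print(result)  # 'marker'
```

A constructor-supplied runner would simply take precedence over the stashed attribute, which is the override ordering the tests above assert.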


@@ -1,84 +0,0 @@
# Copyright (c) 2012 testtools developers. See LICENSE for details.
"""Test tag support."""
from testtools import TestCase
from testtools.tags import TagContext
class TestTags(TestCase):
def test_no_tags(self):
# A tag context has no tags initially.
tag_context = TagContext()
self.assertEqual(set(), tag_context.get_current_tags())
def test_add_tag(self):
# A tag added with change_tags appears in get_current_tags.
tag_context = TagContext()
tag_context.change_tags(set(['foo']), set())
self.assertEqual(set(['foo']), tag_context.get_current_tags())
def test_add_tag_twice(self):
# Calling change_tags twice to add tags adds both tags to the current
# tags.
tag_context = TagContext()
tag_context.change_tags(set(['foo']), set())
tag_context.change_tags(set(['bar']), set())
self.assertEqual(
set(['foo', 'bar']), tag_context.get_current_tags())
def test_change_tags_returns_tags(self):
# change_tags returns the current tags. This is a convenience.
tag_context = TagContext()
tags = tag_context.change_tags(set(['foo']), set())
self.assertEqual(set(['foo']), tags)
def test_remove_tag(self):
# change_tags can remove tags from the context.
tag_context = TagContext()
tag_context.change_tags(set(['foo']), set())
tag_context.change_tags(set(), set(['foo']))
self.assertEqual(set(), tag_context.get_current_tags())
def test_child_context(self):
# A TagContext can have a parent. If so, its tags are the tags of the
# parent at the moment of construction.
parent = TagContext()
parent.change_tags(set(['foo']), set())
child = TagContext(parent)
self.assertEqual(
parent.get_current_tags(), child.get_current_tags())
def test_add_to_child(self):
# Adding a tag to the child context doesn't affect the parent.
parent = TagContext()
parent.change_tags(set(['foo']), set())
child = TagContext(parent)
child.change_tags(set(['bar']), set())
self.assertEqual(set(['foo', 'bar']), child.get_current_tags())
self.assertEqual(set(['foo']), parent.get_current_tags())
def test_remove_in_child(self):
# A tag that was in the parent context can be removed from the child
        # context without affecting the parent.
parent = TagContext()
parent.change_tags(set(['foo']), set())
child = TagContext(parent)
child.change_tags(set(), set(['foo']))
self.assertEqual(set(), child.get_current_tags())
self.assertEqual(set(['foo']), parent.get_current_tags())
def test_parent(self):
# The parent can be retrieved from a child context.
parent = TagContext()
parent.change_tags(set(['foo']), set())
child = TagContext(parent)
child.change_tags(set(), set(['foo']))
self.assertEqual(parent, child.parent)
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
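The parent/child semantics these tests pin down - a child snapshots the parent's tags at construction, and later changes stay local to the child - fit in a few lines. `MiniTagContext` is an illustrative stand-in, not the real implementation:

```python
class MiniTagContext:
    def __init__(self, parent=None):
        self.parent = parent
        # Snapshot the parent's tags at construction time.
        self._tags = set(parent.get_current_tags()) if parent else set()

    def get_current_tags(self):
        # Return a copy so callers can't mutate our state.
        return set(self._tags)

    def change_tags(self, new_tags, gone_tags):
        self._tags = (self._tags | new_tags) - gone_tags
        # Returning the current tags is a convenience, as tested above.
        return self.get_current_tags()

parent = MiniTagContext()
parent.change_tags({'foo'}, set())
child = MiniTagContext(parent)
child.change_tags({'bar'}, set())
print(sorted(child.get_current_tags()))   # ['bar', 'foo']
print(sorted(parent.get_current_tags()))  # ['foo']
```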

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,341 +0,0 @@
# Copyright (c) 2009-2015 testtools developers. See LICENSE for details.
"""Test ConcurrentTestSuite and related things."""
import doctest
from pprint import pformat
import unittest
import unittest2
from extras import try_import
from testtools import (
ConcurrentTestSuite,
ConcurrentStreamTestSuite,
iterate_tests,
PlaceHolder,
TestByTestResult,
TestCase,
)
from testtools.compat import _u
from testtools.matchers import DocTestMatches, Equals
from testtools.testsuite import FixtureSuite, sorted_tests
from testtools.tests.helpers import LoggingResult
from testtools.testresult.doubles import StreamResult as LoggingStream
FunctionFixture = try_import('fixtures.FunctionFixture')
class Sample(TestCase):
def __hash__(self):
return id(self)
def test_method1(self):
pass
def test_method2(self):
pass
class TestConcurrentTestSuiteRun(TestCase):
def test_broken_test(self):
log = []
def on_test(test, status, start_time, stop_time, tags, details):
log.append((test.id(), status, set(details.keys())))
class BrokenTest(object):
# Simple break - no result parameter to run()
def __call__(self):
pass
run = __call__
original_suite = unittest.TestSuite([BrokenTest()])
suite = ConcurrentTestSuite(original_suite, self.split_suite)
suite.run(TestByTestResult(on_test))
self.assertEqual([('broken-runner', 'error', set(['traceback']))], log)
def test_trivial(self):
log = []
result = LoggingResult(log)
test1 = Sample('test_method1')
test2 = Sample('test_method2')
original_suite = unittest.TestSuite([test1, test2])
suite = ConcurrentTestSuite(original_suite, self.split_suite)
suite.run(result)
# log[0] is the timestamp for the first test starting.
test1 = log[1][1]
test2 = log[-1][1]
self.assertIsInstance(test1, Sample)
self.assertIsInstance(test2, Sample)
self.assertNotEqual(test1.id(), test2.id())
def test_wrap_result(self):
# ConcurrentTestSuite has a hook for wrapping the per-thread result.
wrap_log = []
def wrap_result(thread_safe_result, thread_number):
wrap_log.append(
(thread_safe_result.result.decorated, thread_number))
return thread_safe_result
result_log = []
result = LoggingResult(result_log)
test1 = Sample('test_method1')
test2 = Sample('test_method2')
original_suite = unittest.TestSuite([test1, test2])
suite = ConcurrentTestSuite(
original_suite, self.split_suite, wrap_result=wrap_result)
suite.run(result)
self.assertEqual(
[(result, 0),
(result, 1),
], wrap_log)
# Smoke test to make sure everything ran OK.
self.assertNotEqual([], result_log)
def split_suite(self, suite):
return list(iterate_tests(suite))
class TestConcurrentStreamTestSuiteRun(TestCase):
def test_trivial(self):
result = LoggingStream()
test1 = Sample('test_method1')
test2 = Sample('test_method2')
cases = lambda:[(test1, '0'), (test2, '1')]
suite = ConcurrentStreamTestSuite(cases)
suite.run(result)
def freeze(set_or_none):
if set_or_none is None:
return set_or_none
return frozenset(set_or_none)
# Ignore event order: we're testing the code is all glued together,
# which just means we can pump events through and they get route codes
# added appropriately.
self.assertEqual(set([
('status',
'testtools.tests.test_testsuite.Sample.test_method1',
'inprogress',
None,
True,
None,
None,
False,
None,
'0',
None,
),
('status',
'testtools.tests.test_testsuite.Sample.test_method1',
'success',
frozenset(),
True,
None,
None,
False,
None,
'0',
None,
),
('status',
'testtools.tests.test_testsuite.Sample.test_method2',
'inprogress',
None,
True,
None,
None,
False,
None,
'1',
None,
),
('status',
'testtools.tests.test_testsuite.Sample.test_method2',
'success',
frozenset(),
True,
None,
None,
False,
None,
'1',
None,
),
]), set(event[0:3] + (freeze(event[3]),) + event[4:10] + (None,)
for event in result._events))
def test_broken_runner(self):
# If the object called breaks, the stream is informed about it
# regardless.
class BrokenTest(object):
# broken - no result parameter!
def __call__(self):
pass
def run(self):
pass
result = LoggingStream()
cases = lambda:[(BrokenTest(), '0')]
suite = ConcurrentStreamTestSuite(cases)
suite.run(result)
events = result._events
# Check the traceback loosely.
self.assertEqual(events[1][6].decode('utf8'),
"Traceback (most recent call last):\n")
self.assertThat(events[2][6].decode('utf8'), DocTestMatches("""\
File "...testtools/testsuite.py", line ..., in _run_test
test.run(process_result)
""", doctest.ELLIPSIS))
self.assertThat(events[3][6].decode('utf8'), DocTestMatches("""\
TypeError: run() takes ...1 ...argument...2...given...
""", doctest.ELLIPSIS))
events = [event[0:10] + (None,) for event in events]
events[1] = events[1][:6] + (None,) + events[1][7:]
events[2] = events[2][:6] + (None,) + events[2][7:]
events[3] = events[3][:6] + (None,) + events[3][7:]
self.assertEqual([
('status', "broken-runner-'0'", 'inprogress', None, True, None, None, False, None, _u('0'), None),
('status', "broken-runner-'0'", None, None, True, 'traceback', None,
False,
'text/x-traceback; charset="utf8"; language="python"',
'0',
None),
('status', "broken-runner-'0'", None, None, True, 'traceback', None,
False,
'text/x-traceback; charset="utf8"; language="python"',
'0',
None),
('status', "broken-runner-'0'", None, None, True, 'traceback', None,
True,
'text/x-traceback; charset="utf8"; language="python"',
'0',
None),
('status', "broken-runner-'0'", 'fail', set(), True, None, None, False, None, _u('0'), None)
], events)
def split_suite(self, suite):
tests = list(enumerate(iterate_tests(suite)))
return [(test, _u(str(pos))) for pos, test in tests]
def test_setupclass_skip(self):
        # We should support setUpClass skipping using cls.skipException,
        # because folks have used that.
class Skips(TestCase):
@classmethod
def setUpClass(cls):
raise cls.skipException('foo')
def test_notrun(self):
pass
# Test discovery uses the default suite from unittest2 (unless users
# deliberately change things, in which case they keep both pieces).
suite = unittest2.TestSuite([Skips("test_notrun")])
log = []
result = LoggingResult(log)
suite.run(result)
self.assertEqual(['addSkip'], [item[0] for item in log])
def test_setupclass_upcall(self):
        # Note that this is kind of a case test and kind of a suite test,
        # because setUpClass is linked between them.
class Simples(TestCase):
@classmethod
def setUpClass(cls):
super(Simples, cls).setUpClass()
def test_simple(self):
pass
# Test discovery uses the default suite from unittest2 (unless users
# deliberately change things, in which case they keep both pieces).
suite = unittest2.TestSuite([Simples("test_simple")])
log = []
result = LoggingResult(log)
suite.run(result)
self.assertEqual(
['startTest', 'addSuccess', 'stopTest'],
[item[0] for item in log])
class TestFixtureSuite(TestCase):
def setUp(self):
super(TestFixtureSuite, self).setUp()
if FunctionFixture is None:
self.skip("Need fixtures")
def test_fixture_suite(self):
log = []
class Sample(TestCase):
def test_one(self):
log.append(1)
def test_two(self):
log.append(2)
fixture = FunctionFixture(
lambda: log.append('setUp'),
lambda fixture: log.append('tearDown'))
suite = FixtureSuite(fixture, [Sample('test_one'), Sample('test_two')])
suite.run(LoggingResult([]))
self.assertEqual(['setUp', 1, 2, 'tearDown'], log)
def test_fixture_suite_sort(self):
log = []
class Sample(TestCase):
def test_one(self):
log.append(1)
def test_two(self):
log.append(2)
fixture = FunctionFixture(
lambda: log.append('setUp'),
lambda fixture: log.append('tearDown'))
suite = FixtureSuite(fixture, [Sample('test_one'), Sample('test_one')])
self.assertRaises(ValueError, suite.sort_tests)
class TestSortedTests(TestCase):
def test_sorts_custom_suites(self):
a = PlaceHolder('a')
b = PlaceHolder('b')
class Subclass(unittest.TestSuite):
def sort_tests(self):
self._tests = sorted_tests(self, True)
input_suite = Subclass([b, a])
suite = sorted_tests(input_suite)
self.assertEqual([a, b], list(iterate_tests(suite)))
self.assertEqual([input_suite], list(iter(suite)))
def test_custom_suite_without_sort_tests_works(self):
a = PlaceHolder('a')
b = PlaceHolder('b')
        class Subclass(unittest.TestSuite):
            pass
input_suite = Subclass([b, a])
suite = sorted_tests(input_suite)
self.assertEqual([b, a], list(iterate_tests(suite)))
self.assertEqual([input_suite], list(iter(suite)))
def test_sorts_simple_suites(self):
a = PlaceHolder('a')
b = PlaceHolder('b')
suite = sorted_tests(unittest.TestSuite([b, a]))
self.assertEqual([a, b], list(iterate_tests(suite)))
def test_duplicate_simple_suites(self):
a = PlaceHolder('a')
b = PlaceHolder('b')
c = PlaceHolder('a')
self.assertRaises(
ValueError, sorted_tests, unittest.TestSuite([a, b, c]))
def test_multiple_duplicates(self):
# If there are multiple duplicates on a test suite, we report on them
# all.
a = PlaceHolder('a')
b = PlaceHolder('b')
c = PlaceHolder('a')
d = PlaceHolder('b')
error = self.assertRaises(
ValueError, sorted_tests, unittest.TestSuite([a, b, c, d]))
self.assertThat(
str(error),
Equals("Duplicate test ids detected: %s" % (
pformat({'a': 2, 'b': 2}),)))
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
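The duplicate-id reporting asserted above (every duplicated id named, with its count) is essentially a Counter pass over the flattened suite's ids. A hedged sketch of that check, not the testtools code itself:

```python
from collections import Counter
from pprint import pformat

def check_and_sort_ids(test_ids):
    """Raise ValueError naming every id that occurs more than once,
    otherwise return the ids sorted."""
    counts = Counter(test_ids)
    duplicates = {tid: n for tid, n in counts.items() if n > 1}
    if duplicates:
        # Same message shape as asserted in test_multiple_duplicates.
        raise ValueError(
            "Duplicate test ids detected: %s" % (pformat(duplicates),))
    return sorted(test_ids)

print(check_and_sort_ids(['b', 'a']))  # ['a', 'b']
try:
    check_and_sort_ids(['a', 'b', 'a', 'b'])
except ValueError as e:
    print(str(e).startswith('Duplicate test ids detected'))  # True
```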


@@ -1,93 +0,0 @@
# Copyright (c) 2011 testtools developers. See LICENSE for details.
from __future__ import with_statement
import sys
from testtools import (
ExpectedException,
TestCase,
)
from testtools.matchers import (
AfterPreprocessing,
Equals,
EndsWith,
)
class TestExpectedException(TestCase):
"""Test the ExpectedException context manager."""
def test_pass_on_raise(self):
with ExpectedException(ValueError, 'tes.'):
raise ValueError('test')
def test_pass_on_raise_matcher(self):
with ExpectedException(
ValueError, AfterPreprocessing(str, Equals('test'))):
raise ValueError('test')
def test_raise_on_text_mismatch(self):
try:
with ExpectedException(ValueError, 'tes.'):
raise ValueError('mismatch')
except AssertionError:
e = sys.exc_info()[1]
self.assertEqual("'mismatch' does not match /tes./", str(e))
else:
self.fail('AssertionError not raised.')
def test_raise_on_general_mismatch(self):
matcher = AfterPreprocessing(str, Equals('test'))
value_error = ValueError('mismatch')
try:
with ExpectedException(ValueError, matcher):
raise value_error
except AssertionError:
e = sys.exc_info()[1]
self.assertEqual(matcher.match(value_error).describe(), str(e))
else:
self.fail('AssertionError not raised.')
def test_raise_on_error_mismatch(self):
try:
with ExpectedException(TypeError, 'tes.'):
raise ValueError('mismatch')
except ValueError:
e = sys.exc_info()[1]
self.assertEqual('mismatch', str(e))
else:
self.fail('ValueError not raised.')
def test_raise_if_no_exception(self):
try:
with ExpectedException(TypeError, 'tes.'):
pass
except AssertionError:
e = sys.exc_info()[1]
self.assertEqual('TypeError not raised.', str(e))
else:
self.fail('AssertionError not raised.')
def test_pass_on_raise_any_message(self):
with ExpectedException(ValueError):
raise ValueError('whatever')
def test_annotate(self):
def die():
with ExpectedException(ValueError, msg="foo"):
pass
exc = self.assertRaises(AssertionError, die)
self.assertThat(exc.args[0], EndsWith(': foo'))
def test_annotated_matcher(self):
def die():
with ExpectedException(ValueError, 'bar', msg="foo"):
pass
exc = self.assertRaises(AssertionError, die)
self.assertThat(exc.args[0], EndsWith(': foo'))
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)
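The pass/fail cases above follow from a small context-manager contract: swallow a matching exception, raise AssertionError on a message mismatch or when nothing is raised, and let an unexpected exception type propagate. A minimal sketch under those assumptions; `MiniExpectedException` is illustrative, not the real ExpectedException:

```python
import re

class MiniExpectedException:
    def __init__(self, exc_type, pattern=None, msg=None):
        self.exc_type = exc_type
        self.pattern = pattern
        self.msg = msg

    def __enter__(self):
        return self

    def __exit__(self, exctype, value, tb):
        if exctype is None:
            error = '%s not raised.' % self.exc_type.__name__
            if self.msg:
                error += ': ' + self.msg
            raise AssertionError(error)
        if not issubclass(exctype, self.exc_type):
            return False  # unexpected type: let it propagate
        if self.pattern and not re.match(self.pattern, str(value)):
            raise AssertionError(
                '%r does not match /%s/' % (str(value), self.pattern))
        return True  # matching exception: swallow it

with MiniExpectedException(ValueError, 'tes.'):
    raise ValueError('test')  # matches, so nothing escapes

try:
    with MiniExpectedException(ValueError, 'tes.'):
        raise ValueError('mismatch')
except AssertionError as e:
    print(e)  # 'mismatch' does not match /tes./
```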


@@ -1,20 +0,0 @@
# Copyright (c) testtools developers. See LICENSE for details.
from unittest import TestSuite
def test_suite():
from testtools.tests.twistedsupport import (
test_deferred,
test_matchers,
test_runtest,
test_spinner,
)
modules = [
test_deferred,
test_matchers,
test_runtest,
test_spinner,
]
suites = map(lambda x: x.test_suite(), modules)
return TestSuite(suites)


@@ -1,18 +0,0 @@
# Copyright (c) 2010, 2016 testtools developers. See LICENSE for details.
__all__ = [
'NeedsTwistedTestCase',
]
from extras import try_import
from testtools import TestCase
defer = try_import('twisted.internet.defer')
class NeedsTwistedTestCase(TestCase):
def setUp(self):
super(NeedsTwistedTestCase, self).setUp()
if defer is None:
self.skipTest("Need Twisted to run")


@@ -1,56 +0,0 @@
# Copyright (c) testtools developers. See LICENSE for details.
"""Tests for testtools._deferred."""
from extras import try_import
from testtools.matchers import (
Equals,
MatchesException,
Raises,
)
from ._helpers import NeedsTwistedTestCase
DeferredNotFired = try_import(
'testtools.twistedsupport._deferred.DeferredNotFired')
extract_result = try_import(
'testtools.twistedsupport._deferred.extract_result')
defer = try_import('twisted.internet.defer')
Failure = try_import('twisted.python.failure.Failure')
class TestExtractResult(NeedsTwistedTestCase):
"""Tests for ``extract_result``."""
def test_not_fired(self):
        # extract_result raises DeferredNotFired if it's given a
        # Deferred that has not fired.
self.assertThat(
lambda: extract_result(defer.Deferred()),
Raises(MatchesException(DeferredNotFired)))
def test_success(self):
        # extract_result returns the value of the Deferred if it has
        # fired successfully.
marker = object()
d = defer.succeed(marker)
self.assertThat(extract_result(d), Equals(marker))
def test_failure(self):
        # extract_result raises the failure's exception if it's given
        # a Deferred that is failing.
try:
1/0
except ZeroDivisionError:
f = Failure()
d = defer.fail(f)
self.assertThat(
lambda: extract_result(d),
Raises(MatchesException(ZeroDivisionError)))
def test_suite():
from unittest2 import TestLoader, TestSuite
return TestLoader().loadTestsFromName(__name__)
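The `extract_result` contract exercised above (return the value of a fired Deferred, re-raise its failure, raise `DeferredNotFired` otherwise) can be mimicked against a toy Deferred-like object. Everything here is an illustrative sketch; no Twisted required:

```python
class DeferredNotFired(Exception):
    """The Deferred-like object has no result yet."""

_NOTHING = object()  # sentinel: distinguishes "no result" from None

class ToyDeferred:
    def __init__(self):
        self.result = _NOTHING

    def callback(self, value):
        self.result = value

    def errback(self, exc):
        self.result = exc

def extract_result(deferred):
    if deferred.result is _NOTHING:
        raise DeferredNotFired(deferred)
    if isinstance(deferred.result, BaseException):
        raise deferred.result  # re-raise the stored failure
    return deferred.result

d = ToyDeferred()
d.callback(42)
print(extract_result(d))  # 42
```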


@@ -1,209 +0,0 @@
# Copyright (c) testtools developers. See LICENSE for details.
"""Tests for Deferred matchers."""
from extras import try_import
from testtools.compat import _u
from testtools.content import TracebackContent
from testtools.matchers import (
AfterPreprocessing,
Equals,
Is,
MatchesDict,
)
from ._helpers import NeedsTwistedTestCase
has_no_result = try_import('testtools.twistedsupport.has_no_result')
failed = try_import('testtools.twistedsupport.failed')
succeeded = try_import('testtools.twistedsupport.succeeded')
defer = try_import('twisted.internet.defer')
Failure = try_import('twisted.python.failure.Failure')
def mismatches(description, details=None):
"""Match a ``Mismatch`` object."""
if details is None:
details = Equals({})
matcher = MatchesDict({
'description': description,
'details': details,
})
def get_mismatch_info(mismatch):
return {
'description': mismatch.describe(),
'details': mismatch.get_details(),
}
return AfterPreprocessing(get_mismatch_info, matcher)
def make_failure(exc_value):
"""Raise ``exc_value`` and return the failure."""
try:
raise exc_value
except:
return Failure()
class NoResultTests(NeedsTwistedTestCase):
"""Tests for ``has_no_result``."""
def match(self, thing):
return has_no_result().match(thing)
def test_unfired_matches(self):
# A Deferred that hasn't fired matches has_no_result().
self.assertThat(self.match(defer.Deferred()), Is(None))
def test_succeeded_does_no_match(self):
# A Deferred that's fired successfully does not match has_no_result().
result = object()
deferred = defer.succeed(result)
mismatch = self.match(deferred)
self.assertThat(
mismatch, mismatches(Equals(_u(
'No result expected on %r, found %r instead'
% (deferred, result)))))
def test_failed_does_not_match(self):
# A Deferred that's failed does not match has_no_result().
fail = make_failure(RuntimeError('arbitrary failure'))
deferred = defer.fail(fail)
# Suppress unhandled error in Deferred.
self.addCleanup(deferred.addErrback, lambda _: None)
mismatch = self.match(deferred)
self.assertThat(
mismatch, mismatches(Equals(_u(
'No result expected on %r, found %r instead'
% (deferred, fail)))))
def test_success_after_assertion(self):
# We can create a Deferred, assert that it hasn't fired, then fire it
# and collect the result.
deferred = defer.Deferred()
self.assertThat(deferred, has_no_result())
results = []
deferred.addCallback(results.append)
marker = object()
deferred.callback(marker)
self.assertThat(results, Equals([marker]))
def test_failure_after_assertion(self):
# We can create a Deferred, assert that it hasn't fired, then fire it
# with a failure and collect the result.
deferred = defer.Deferred()
self.assertThat(deferred, has_no_result())
results = []
deferred.addErrback(results.append)
fail = make_failure(RuntimeError('arbitrary failure'))
deferred.errback(fail)
self.assertThat(results, Equals([fail]))
class SuccessResultTests(NeedsTwistedTestCase):
def match(self, matcher, value):
return succeeded(matcher).match(value)
def test_succeeded_result_passes(self):
# A Deferred that has fired successfully matches against the value it
# was fired with.
result = object()
deferred = defer.succeed(result)
self.assertThat(self.match(Is(result), deferred), Is(None))
def test_different_succeeded_result_fails(self):
        # A successfully fired Deferred mismatches a matcher that does
        # not match the value it was fired with.
result = object()
deferred = defer.succeed(result)
matcher = Is(None) # Something that doesn't match `result`.
mismatch = matcher.match(result)
self.assertThat(
self.match(matcher, deferred),
mismatches(Equals(mismatch.describe()),
Equals(mismatch.get_details())))
def test_not_fired_fails(self):
# A Deferred that has not yet fired fails to match.
deferred = defer.Deferred()
arbitrary_matcher = Is(None)
self.assertThat(
self.match(arbitrary_matcher, deferred),
mismatches(
Equals(_u('Success result expected on %r, found no result '
'instead') % (deferred,))))
def test_failing_fails(self):
# A Deferred that has fired with a failure fails to match.
deferred = defer.Deferred()
fail = make_failure(RuntimeError('arbitrary failure'))
deferred.errback(fail)
arbitrary_matcher = Is(None)
self.assertThat(
self.match(arbitrary_matcher, deferred),
mismatches(
Equals(
_u('Success result expected on %r, found failure result '
'instead: %r' % (deferred, fail))),
Equals({'traceback': TracebackContent(
(fail.type, fail.value, fail.getTracebackObject()), None,
)}),
))
class FailureResultTests(NeedsTwistedTestCase):
def match(self, matcher, value):
return failed(matcher).match(value)
def test_failure_passes(self):
# A Deferred that has fired with a failure matches against the value
# it was fired with.
fail = make_failure(RuntimeError('arbitrary failure'))
deferred = defer.fail(fail)
self.assertThat(self.match(Is(fail), deferred), Is(None))
def test_different_failure_fails(self):
# A Deferred that has fired with a failure fails to match against a
# value other than the failure it was fired with.
fail = make_failure(RuntimeError('arbitrary failure'))
deferred = defer.fail(fail)
matcher = Is(None) # Something that doesn't match `fail`.
mismatch = matcher.match(fail)
self.assertThat(
self.match(matcher, deferred),
mismatches(Equals(mismatch.describe()),
Equals(mismatch.get_details())))
def test_success_fails(self):
# A Deferred that has fired successfully fails to match.
result = object()
deferred = defer.succeed(result)
matcher = Is(None) # Can be any matcher
self.assertThat(
self.match(matcher, deferred),
mismatches(Equals(_u(
'Failure result expected on %r, found success '
'result (%r) instead' % (deferred, result)))))
def test_no_result_fails(self):
# A Deferred that has not fired fails to match.
deferred = defer.Deferred()
matcher = Is(None) # Can be any matcher
self.assertThat(
self.match(matcher, deferred),
mismatches(Equals(_u(
'Failure result expected on %r, found no result instead'
% (deferred,)))))
def test_suite():
from unittest2 import TestLoader, TestSuite
return TestLoader().loadTestsFromName(__name__)

File diff suppressed because it is too large


@@ -1,327 +0,0 @@
# Copyright (c) testtools developers. See LICENSE for details.
"""Tests for the evil Twisted reactor-spinning we do."""
import os
import signal
from extras import try_import
from testtools import skipIf
from testtools.matchers import (
Equals,
Is,
MatchesException,
Raises,
)
from ._helpers import NeedsTwistedTestCase
_spinner = try_import('testtools.twistedsupport._spinner')
defer = try_import('twisted.internet.defer')
Failure = try_import('twisted.python.failure.Failure')
class TestNotReentrant(NeedsTwistedTestCase):
def test_not_reentrant(self):
# A function decorated as not being re-entrant will raise a
# _spinner.ReentryError if it is called while it is running.
calls = []
@_spinner.not_reentrant
def log_something():
calls.append(None)
if len(calls) < 5:
log_something()
self.assertThat(
log_something, Raises(MatchesException(_spinner.ReentryError)))
self.assertEqual(1, len(calls))
def test_deeper_stack(self):
calls = []
@_spinner.not_reentrant
def g():
calls.append(None)
if len(calls) < 5:
f()
@_spinner.not_reentrant
def f():
calls.append(None)
if len(calls) < 5:
g()
self.assertThat(f, Raises(MatchesException(_spinner.ReentryError)))
self.assertEqual(2, len(calls))
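The decorator exercised above lives in `testtools.twistedsupport._spinner`. As a rough stdlib-only sketch (the flag-on-the-wrapper mechanism here is illustrative, not necessarily the real implementation), a non-reentrant guard behaves like this:

```python
import functools


class ReentryError(Exception):
    """Raised when a non-reentrant function is re-entered."""


def not_reentrant(function):
    # Refuse to run `function` while a call to it is already in progress.
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        if wrapper._running:
            raise ReentryError(function.__name__)
        wrapper._running = True
        try:
            return function(*args, **kwargs)
        finally:
            wrapper._running = False
    wrapper._running = False
    return wrapper


calls = []

@not_reentrant
def log_something():
    calls.append(None)
    if len(calls) < 5:
        log_something()

try:
    log_something()
except ReentryError:
    reentered = True
else:
    reentered = False
```

The recursive call trips the guard on its first re-entry, so `calls` ends up with a single entry — the same behaviour `test_not_reentrant` asserts.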
class TestTrapUnhandledErrors(NeedsTwistedTestCase):
def test_no_deferreds(self):
marker = object()
result, errors = _spinner.trap_unhandled_errors(lambda: marker)
self.assertEqual([], errors)
self.assertIs(marker, result)
def test_unhandled_error(self):
failures = []
def make_deferred_but_dont_handle():
try:
1/0
except ZeroDivisionError:
f = Failure()
failures.append(f)
defer.fail(f)
result, errors = _spinner.trap_unhandled_errors(
make_deferred_but_dont_handle)
self.assertIs(None, result)
self.assertEqual(failures, [error.failResult for error in errors])
class TestRunInReactor(NeedsTwistedTestCase):
def make_reactor(self):
from twisted.internet import reactor
return reactor
def make_spinner(self, reactor=None):
if reactor is None:
reactor = self.make_reactor()
return _spinner.Spinner(reactor)
def make_timeout(self):
return 0.01
def test_function_called(self):
# run_in_reactor actually calls the function given to it.
calls = []
marker = object()
self.make_spinner().run(self.make_timeout(), calls.append, marker)
self.assertThat(calls, Equals([marker]))
def test_return_value_returned(self):
# run_in_reactor returns the value returned by the function given to
# it.
marker = object()
result = self.make_spinner().run(self.make_timeout(), lambda: marker)
self.assertThat(result, Is(marker))
def test_exception_reraised(self):
# If the given function raises an error, run_in_reactor re-raises that
# error.
self.assertThat(
lambda: self.make_spinner().run(self.make_timeout(), lambda: 1/0),
Raises(MatchesException(ZeroDivisionError)))
def test_keyword_arguments(self):
# run_in_reactor passes keyword arguments on.
calls = []
function = lambda *a, **kw: calls.extend([a, kw])
self.make_spinner().run(self.make_timeout(), function, foo=42)
self.assertThat(calls, Equals([(), {'foo': 42}]))
def test_not_reentrant(self):
# run_in_reactor raises an error if it is called inside another call
# to run_in_reactor.
spinner = self.make_spinner()
self.assertThat(lambda: spinner.run(
self.make_timeout(), spinner.run, self.make_timeout(),
lambda: None), Raises(MatchesException(_spinner.ReentryError)))
def test_deferred_value_returned(self):
# If the given function returns a Deferred, run_in_reactor returns the
# value in the Deferred at the end of the callback chain.
marker = object()
result = self.make_spinner().run(
self.make_timeout(), lambda: defer.succeed(marker))
self.assertThat(result, Is(marker))
def test_preserve_signal_handler(self):
signals = ['SIGINT', 'SIGTERM', 'SIGCHLD']
signals = list(filter(
None, (getattr(signal, name, None) for name in signals)))
for sig in signals:
self.addCleanup(signal.signal, sig, signal.getsignal(sig))
new_hdlrs = list(lambda *a: None for _ in signals)
for sig, hdlr in zip(signals, new_hdlrs):
signal.signal(sig, hdlr)
spinner = self.make_spinner()
spinner.run(self.make_timeout(), lambda: None)
self.assertItemsEqual(new_hdlrs, list(map(signal.getsignal, signals)))
def test_timeout(self):
# If the function takes too long to run, we raise a
# _spinner.TimeoutError.
timeout = self.make_timeout()
self.assertThat(
lambda: self.make_spinner().run(timeout, lambda: defer.Deferred()),
Raises(MatchesException(_spinner.TimeoutError)))
def test_no_junk_by_default(self):
# If the reactor hasn't spun yet, then there cannot be any junk.
spinner = self.make_spinner()
self.assertThat(spinner.get_junk(), Equals([]))
def test_clean_do_nothing(self):
# If there's nothing going on in the reactor, then clean does nothing
# and returns an empty list.
spinner = self.make_spinner()
result = spinner._clean()
self.assertThat(result, Equals([]))
def test_clean_delayed_call(self):
# If there's a delayed call in the reactor, then clean cancels it and
# returns an empty list.
reactor = self.make_reactor()
spinner = self.make_spinner(reactor)
call = reactor.callLater(10, lambda: None)
results = spinner._clean()
self.assertThat(results, Equals([call]))
self.assertThat(call.active(), Equals(False))
def test_clean_delayed_call_cancelled(self):
# If there's a delayed call that's just been cancelled, then it's no
# longer there.
reactor = self.make_reactor()
spinner = self.make_spinner(reactor)
call = reactor.callLater(10, lambda: None)
call.cancel()
results = spinner._clean()
self.assertThat(results, Equals([]))
def test_clean_selectables(self):
# If there's still a selectable (e.g. a listening socket), then
# clean() removes it from the reactor's registry.
#
# Note that the socket is left open. This emulates a bug in trial.
from twisted.internet.protocol import ServerFactory
reactor = self.make_reactor()
spinner = self.make_spinner(reactor)
port = reactor.listenTCP(0, ServerFactory(), interface='127.0.0.1')
spinner.run(self.make_timeout(), lambda: None)
results = spinner.get_junk()
self.assertThat(results, Equals([port]))
def test_clean_running_threads(self):
import threading
import time
current_threads = list(threading.enumerate())
reactor = self.make_reactor()
timeout = self.make_timeout()
spinner = self.make_spinner(reactor)
spinner.run(timeout, reactor.callInThread, time.sleep, timeout / 2.0)
self.assertThat(list(threading.enumerate()), Equals(current_threads))
def test_leftover_junk_available(self):
# If 'run' is given a function that leaves the reactor dirty in some
# way, 'run' will clean up the reactor and then store information
# about the junk. This information can be got using get_junk.
from twisted.internet.protocol import ServerFactory
reactor = self.make_reactor()
spinner = self.make_spinner(reactor)
port = spinner.run(
self.make_timeout(), reactor.listenTCP, 0, ServerFactory(),
interface='127.0.0.1')
self.assertThat(spinner.get_junk(), Equals([port]))
def test_will_not_run_with_previous_junk(self):
# If 'run' is called and there's still junk in the spinner's junk
# list, then the spinner will refuse to run.
from twisted.internet.protocol import ServerFactory
reactor = self.make_reactor()
spinner = self.make_spinner(reactor)
timeout = self.make_timeout()
spinner.run(timeout, reactor.listenTCP, 0, ServerFactory(), interface='127.0.0.1')
self.assertThat(lambda: spinner.run(timeout, lambda: None),
Raises(MatchesException(_spinner.StaleJunkError)))
def test_clear_junk_clears_previous_junk(self):
# clear_junk removes any junk from the spinner's junk list and
# returns it, so that a subsequent 'run' can proceed.
from twisted.internet.protocol import ServerFactory
reactor = self.make_reactor()
spinner = self.make_spinner(reactor)
timeout = self.make_timeout()
port = spinner.run(timeout, reactor.listenTCP, 0, ServerFactory(),
interface='127.0.0.1')
junk = spinner.clear_junk()
self.assertThat(junk, Equals([port]))
self.assertThat(spinner.get_junk(), Equals([]))
@skipIf(os.name != "posix", "Sending SIGINT with os.kill is posix only")
def test_sigint_raises_no_result_error(self):
# If we get a SIGINT during a run, we raise _spinner.NoResultError.
SIGINT = getattr(signal, 'SIGINT', None)
if not SIGINT:
self.skipTest("SIGINT not available")
reactor = self.make_reactor()
spinner = self.make_spinner(reactor)
timeout = self.make_timeout()
reactor.callLater(timeout, os.kill, os.getpid(), SIGINT)
self.assertThat(
lambda: spinner.run(timeout * 5, defer.Deferred),
Raises(MatchesException(_spinner.NoResultError)))
self.assertEqual([], spinner._clean())
@skipIf(os.name != "posix", "Sending SIGINT with os.kill is posix only")
def test_sigint_raises_no_result_error_second_time(self):
# If we get a SIGINT during a run, we raise _spinner.NoResultError.
# This test is exactly the same as test_sigint_raises_no_result_error,
# and exists to make sure we haven't futzed with state.
self.test_sigint_raises_no_result_error()
@skipIf(os.name != "posix", "Sending SIGINT with os.kill is posix only")
def test_fast_sigint_raises_no_result_error(self):
# If we get a SIGINT during a run, we raise _spinner.NoResultError.
SIGINT = getattr(signal, 'SIGINT', None)
if not SIGINT:
self.skipTest("SIGINT not available")
reactor = self.make_reactor()
spinner = self.make_spinner(reactor)
timeout = self.make_timeout()
reactor.callWhenRunning(os.kill, os.getpid(), SIGINT)
self.assertThat(
lambda: spinner.run(timeout * 5, defer.Deferred),
Raises(MatchesException(_spinner.NoResultError)))
self.assertEqual([], spinner._clean())
@skipIf(os.name != "posix", "Sending SIGINT with os.kill is posix only")
def test_fast_sigint_raises_no_result_error_second_time(self):
self.test_fast_sigint_raises_no_result_error()
def test_fires_after_timeout(self):
# If we timeout, but the Deferred actually ends up firing after the
# time out (perhaps because Spinner's clean-up code is buggy, or
# perhaps because the code responsible for the callback is in a
# thread), then the next run of a spinner works as intended,
# completely isolated from the previous run.
# Ensure we've timed out, and that we have a handle on the Deferred
# that didn't fire.
reactor = self.make_reactor()
spinner1 = self.make_spinner(reactor)
timeout = self.make_timeout()
deferred1 = defer.Deferred()
self.expectThat(
lambda: spinner1.run(timeout, lambda: deferred1),
Raises(MatchesException(_spinner.TimeoutError)))
# Make a Deferred that will fire *after* deferred1 as long as the
# reactor keeps spinning. We don't care that it's a callback of
# deferred1 per se, only that it strictly fires afterwards.
marker = object()
deferred2 = defer.Deferred()
deferred1.addCallback(
lambda ignored: reactor.callLater(0, deferred2.callback, marker))
def fire_other():
"""Fire Deferred from the last spin while waiting for this one."""
deferred1.callback(object())
return deferred2
spinner2 = self.make_spinner(reactor)
self.assertThat(spinner2.run(timeout, fire_other), Is(marker))
def test_suite():
from unittest import TestLoader
return TestLoader().loadTestsFromName(__name__)


@@ -1,327 +0,0 @@
# Copyright (c) 2009-2015 testtools developers. See LICENSE for details.
"""Test suites and related things."""
__all__ = [
'ConcurrentTestSuite',
'ConcurrentStreamTestSuite',
'filter_by_ids',
'iterate_tests',
'sorted_tests',
]
from pprint import pformat
import sys
import threading
import unittest
from extras import safe_hasattr, try_imports
# This is just to let setup.py work, as testtools is imported in setup.py.
unittest2 = try_imports(['unittest2', 'unittest'])
Queue = try_imports(['Queue.Queue', 'queue.Queue'])
import testtools
def iterate_tests(test_suite_or_case):
"""Iterate through all of the test cases in 'test_suite_or_case'."""
try:
suite = iter(test_suite_or_case)
except TypeError:
yield test_suite_or_case
else:
for test in suite:
for subtest in iterate_tests(test):
yield subtest
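The generator above is self-contained; run against plain `unittest` suites it flattens arbitrary nesting into a stream of test cases:

```python
import unittest


def iterate_tests(test_suite_or_case):
    """Iterate through all of the test cases in 'test_suite_or_case'."""
    try:
        suite = iter(test_suite_or_case)
    except TypeError:
        # Not iterable: a bare TestCase.
        yield test_suite_or_case
    else:
        for test in suite:
            for subtest in iterate_tests(test):
                yield subtest


class Sample(unittest.TestCase):
    def test_a(self):
        pass

    def test_b(self):
        pass


# A suite nested inside another suite still yields the leaf cases in order.
inner = unittest.TestSuite([Sample('test_a'), Sample('test_b')])
outer = unittest.TestSuite([inner])
ids = [case.id() for case in iterate_tests(outer)]
```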
class ConcurrentTestSuite(unittest2.TestSuite):
"""A TestSuite whose run() calls out to a concurrency strategy."""
def __init__(self, suite, make_tests, wrap_result=None):
"""Create a ConcurrentTestSuite to execute suite.
:param suite: A suite to run concurrently.
:param make_tests: A helper function to split the tests in the
ConcurrentTestSuite into some number of concurrently executing
sub-suites. make_tests must take a suite, and return an iterable
of TestCase-like objects, each of which must have a run(result)
method.
:param wrap_result: An optional function that takes a thread-safe
result and a thread number and must return a ``TestResult``
object. If not provided, then ``ConcurrentTestSuite`` will just
use a ``ThreadsafeForwardingResult`` wrapped around the result
passed to ``run()``.
"""
super(ConcurrentTestSuite, self).__init__([suite])
self.make_tests = make_tests
if wrap_result:
self._wrap_result = wrap_result
def _wrap_result(self, thread_safe_result, thread_number):
"""Wrap a thread-safe result before sending it test results.
You can either override this in a subclass or pass your own
``wrap_result`` in to the constructor. The latter is preferred.
"""
return thread_safe_result
def run(self, result):
"""Run the tests concurrently.
This calls out to the provided make_tests helper, and then serialises
the results so that result only sees activity from one TestCase at
a time.
ConcurrentTestSuite provides no special mechanism to stop the tests
returned by make_tests; it is up to the returned tests to honour the
shouldStop attribute on the result object they are run with, which will
be set if an exception is raised in the thread which
ConcurrentTestSuite.run is called in.
"""
tests = self.make_tests(self)
try:
threads = {}
queue = Queue()
semaphore = threading.Semaphore(1)
for i, test in enumerate(tests):
process_result = self._wrap_result(
testtools.ThreadsafeForwardingResult(result, semaphore), i)
reader_thread = threading.Thread(
target=self._run_test, args=(test, process_result, queue))
threads[test] = reader_thread, process_result
reader_thread.start()
while threads:
finished_test = queue.get()
threads[finished_test][0].join()
del threads[finished_test]
except:
for thread, process_result in threads.values():
process_result.stop()
raise
def _run_test(self, test, process_result, queue):
try:
try:
test.run(process_result)
except Exception:
# The run logic itself failed.
case = testtools.ErrorHolder(
"broken-runner",
error=sys.exc_info())
case.run(process_result)
finally:
queue.put(test)
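Stripped of the result-forwarding details, the start-everything / drain-the-queue / join shape of `run()` above can be sketched with the standard library alone (the helper name and job signature here are illustrative):

```python
import threading
from queue import Queue


def run_concurrently(jobs):
    # Run each zero-argument job in its own thread. Each worker always
    # puts (job, outcome) on the queue when it finishes, mirroring the
    # `finally: queue.put(test)` contract in ConcurrentTestSuite._run_test.
    queue = Queue()

    def worker(job):
        try:
            outcome = job()
        except Exception as e:
            outcome = e
        finally:
            queue.put((job, outcome))

    threads = {}
    for job in jobs:
        thread = threading.Thread(target=worker, args=(job,))
        threads[job] = thread
        thread.start()

    # Drain the queue, joining each worker as it reports completion.
    results = {}
    while threads:
        finished_job, outcome = queue.get()
        threads.pop(finished_job).join()
        results[finished_job] = outcome
    return results


a = lambda: 1 + 1
b = lambda: 'done'
results = run_concurrently([a, b])
```

The unconditional `queue.put` in the worker's `finally` is what guarantees the drain loop terminates even when a job raises.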
class ConcurrentStreamTestSuite(object):
"""A TestSuite whose run() parallelises."""
def __init__(self, make_tests):
"""Create a ConcurrentTestSuite to execute tests returned by make_tests.
:param make_tests: A helper function that should return some number
of concurrently executable test suite / test case objects.
make_tests must take no parameters and return an iterable of
tuples. Each tuple must be of the form (case, route_code), where
case is a TestCase-like object with a run(result) method, and
route_code is either None or a unicode string.
"""
super(ConcurrentStreamTestSuite, self).__init__()
self.make_tests = make_tests
def run(self, result):
"""Run the tests concurrently.
This calls out to the provided make_tests helper to determine the
concurrency to use and to assign routing codes to each worker.
ConcurrentTestSuite provides no special mechanism to stop the tests
returned by make_tests; it is up to the made tests to honour the
shouldStop attribute on the result object they are run with, which will
be set if the test run is to be aborted.
The tests are run with an ExtendedToStreamDecorator wrapped around a
StreamToQueue instance. ConcurrentStreamTestSuite dequeues events from
the queue and forwards them to result. Tests can therefore be either
original unittest tests (or compatible tests), or new tests that emit
StreamResult events directly.
:param result: A StreamResult instance. The caller is responsible for
calling startTestRun on this instance prior to invoking suite.run,
and stopTestRun subsequent to the run method returning.
"""
tests = self.make_tests()
try:
threads = {}
queue = Queue()
for test, route_code in tests:
to_queue = testtools.StreamToQueue(queue, route_code)
process_result = testtools.ExtendedToStreamDecorator(
testtools.TimestampingStreamResult(to_queue))
runner_thread = threading.Thread(
target=self._run_test,
args=(test, process_result, route_code))
threads[to_queue] = runner_thread, process_result
runner_thread.start()
while threads:
event_dict = queue.get()
event = event_dict.pop('event')
if event == 'status':
result.status(**event_dict)
elif event == 'stopTestRun':
thread = threads.pop(event_dict['result'])[0]
thread.join()
elif event == 'startTestRun':
pass
else:
raise ValueError('unknown event type %r' % (event,))
except:
for thread, process_result in threads.values():
# Signal to each TestControl in the ExtendedToStreamDecorator
# that the thread should stop running tests and cleanup
process_result.stop()
raise
def _run_test(self, test, process_result, route_code):
process_result.startTestRun()
try:
try:
test.run(process_result)
except Exception:
# The run logic itself failed.
case = testtools.ErrorHolder(
"broken-runner-'%s'" % (route_code,),
error=sys.exc_info())
case.run(process_result)
finally:
process_result.stopTestRun()
class FixtureSuite(unittest2.TestSuite):
def __init__(self, fixture, tests):
super(FixtureSuite, self).__init__(tests)
self._fixture = fixture
def run(self, result):
self._fixture.setUp()
try:
super(FixtureSuite, self).run(result)
finally:
self._fixture.cleanUp()
def sort_tests(self):
self._tests = sorted_tests(self, True)
def _flatten_tests(suite_or_case, unpack_outer=False):
try:
tests = iter(suite_or_case)
except TypeError:
# Not iterable, assume it's a test case.
return [(suite_or_case.id(), suite_or_case)]
if (type(suite_or_case) in (unittest.TestSuite,) or unpack_outer):
# Plain old test suite (or any others we may add).
result = []
for test in tests:
# Recurse to flatten.
result.extend(_flatten_tests(test))
return result
else:
# Find any old actual test and grab its id.
suite_id = None
tests = iterate_tests(suite_or_case)
for test in tests:
suite_id = test.id()
break
# If it has a sort_tests method, call that.
if safe_hasattr(suite_or_case, 'sort_tests'):
suite_or_case.sort_tests()
return [(suite_id, suite_or_case)]
def filter_by_ids(suite_or_case, test_ids):
"""Remove tests from suite_or_case where their id is not in test_ids.
:param suite_or_case: A test suite or test case.
:param test_ids: Something that supports the __contains__ protocol.
:return: suite_or_case, unless suite_or_case was a case that itself
fails the predicate, in which case it will return a new unittest.TestSuite with
no contents.
This helper exists to provide backwards compatibility with older versions
of Python (currently all versions :)) that don't have a native
filter_by_ids() method on Test(Case|Suite).
For subclasses of TestSuite, filtering is done by:
- attempting to call suite.filter_by_ids(test_ids)
- if there is no method, iterating the suite and identifying tests to
remove, then removing them from _tests, manually recursing into
each entry.
For objects with an id() method - TestCases, filtering is done by:
- attempting to return case.filter_by_ids(test_ids)
- if there is no such method, checking for case.id() in test_ids
and returning case if it is, or TestSuite() if it is not.
For anything else, it is not filtered - it is returned as-is.
To provide compatibility with this routine for a custom TestSuite, just
define a filter_by_ids() method that will return a TestSuite equivalent to
the original minus any tests not in test_ids.
Similarly, to provide compatibility for a custom TestCase that does
something unusual, define filter_by_ids to return a new TestCase object
that will only run test_ids that are in the provided container. If none
would run, return an empty TestSuite().
The contract for this function does not require mutation - each filtered
object can choose to return a new object with the filtered tests. However
because existing custom TestSuite classes in the wild do not have this
method, we need a way to copy their state correctly which is tricky:
thus the backwards-compatible code paths attempt to mutate in place rather
than guessing how to reconstruct a new suite.
"""
# Compatible objects
if safe_hasattr(suite_or_case, 'filter_by_ids'):
return suite_or_case.filter_by_ids(test_ids)
# TestCase objects.
if safe_hasattr(suite_or_case, 'id'):
if suite_or_case.id() in test_ids:
return suite_or_case
else:
return unittest.TestSuite()
# Standard TestSuites or derived classes [assumed to be mutable].
if isinstance(suite_or_case, unittest.TestSuite):
filtered = []
for item in suite_or_case:
filtered.append(filter_by_ids(item, test_ids))
suite_or_case._tests[:] = filtered
# Everything else:
return suite_or_case
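The fallback paths the docstring describes — keep a case whose `id()` is in the container, replace a filtered-out case with an empty suite, and mutate plain suites in place — can be sketched with only the standard library (the function name is illustrative; the real `filter_by_ids` also honours a `filter_by_ids` method on compatible objects):

```python
import unittest


def filter_suite_by_ids(suite_or_case, test_ids):
    # TestCase objects: keep if the id is wanted, else an empty suite.
    if hasattr(suite_or_case, 'id'):
        if suite_or_case.id() in test_ids:
            return suite_or_case
        return unittest.TestSuite()
    # Plain TestSuites: filter each entry, mutating _tests in place.
    if isinstance(suite_or_case, unittest.TestSuite):
        suite_or_case._tests[:] = [
            filter_suite_by_ids(item, test_ids) for item in suite_or_case]
    return suite_or_case


class Sample(unittest.TestCase):
    def test_a(self):
        pass

    def test_b(self):
        pass


suite = unittest.TestSuite([Sample('test_a'), Sample('test_b')])
wanted = {Sample('test_a').id()}
filtered = filter_suite_by_ids(suite, wanted)
```

Only `test_a` survives; `test_b`'s slot is now an empty `TestSuite`, so the total case count drops to one.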
# XXX: Python 2.6. Replace this with Counter when we drop 2.6 support.
def _counter(xs):
"""Return a dict mapping values of xs to number of times they appear."""
counts = {}
for x in xs:
times = counts.setdefault(x, 0)
counts[x] = times + 1
return counts
def sorted_tests(suite_or_case, unpack_outer=False):
"""Sort suite_or_case while preserving non-vanilla TestSuites."""
# Duplicate test ids can induce a TypeError in Python 3.3.
# Detect duplicate test ids and raise an exception when found.
seen = _counter(case.id() for case in iterate_tests(suite_or_case))
duplicates = dict(
(test_id, count) for test_id, count in seen.items() if count > 1)
if duplicates:
raise ValueError(
'Duplicate test ids detected: %s' % (pformat(duplicates),))
tests = _flatten_tests(suite_or_case, unpack_outer=unpack_outer)
tests.sort()
return unittest.TestSuite([test for (sort_key, test) in tests])
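On Python 2.7 and later the `_counter` helper is exactly `collections.Counter`, so the duplicate-id check in `sorted_tests` reduces to:

```python
import unittest
from collections import Counter


class Dupes(unittest.TestCase):
    def test_x(self):
        pass


# Two instances of the same test method share an id(), which is the
# situation sorted_tests rejects with a ValueError.
suite = unittest.TestSuite([Dupes('test_x'), Dupes('test_x')])
seen = Counter(case.id() for case in suite)
duplicates = dict(
    (test_id, count) for test_id, count in seen.items() if count > 1)
```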


@@ -1,33 +0,0 @@
# Copyright (c) 2016 testtools developers. See LICENSE for details.
"""Support for testing code that uses Twisted."""
__all__ = [
# Matchers
'succeeded',
'failed',
'has_no_result',
# Running tests
'AsynchronousDeferredRunTest',
'AsynchronousDeferredRunTestForBrokenTwisted',
'SynchronousDeferredRunTest',
'CaptureTwistedLogs',
'assert_fails_with',
'flush_logged_errors',
]
from ._matchers import (
succeeded,
failed,
has_no_result,
)
from ._runtest import (
AsynchronousDeferredRunTest,
AsynchronousDeferredRunTestForBrokenTwisted,
SynchronousDeferredRunTest,
CaptureTwistedLogs,
assert_fails_with,
flush_logged_errors,
)


@@ -1,112 +0,0 @@
# Copyright (c) testtools developers. See LICENSE for details.
"""Utilities for Deferreds."""
from functools import partial
from testtools.content import TracebackContent
class DeferredNotFired(Exception):
"""Raised when we extract a result from a Deferred that's not fired yet."""
def __init__(self, deferred):
msg = "%r has not fired yet." % (deferred,)
super(DeferredNotFired, self).__init__(msg)
def extract_result(deferred):
"""Extract the result from a fired deferred.
It can happen that you have an API that returns Deferreds for
compatibility with Twisted code, but is in fact synchronous, i.e. the
Deferreds it returns have always fired by the time it returns. In this
case, you can use this function to convert the result back into the usual
form for a synchronous API, i.e. the result itself or a raised exception.
As a rule, this function should not be used when operating with
asynchronous Deferreds (i.e. for normal use of Deferreds in application
code). In those cases, it is better to add callbacks and errbacks as
needed.
"""
failures = []
successes = []
deferred.addCallbacks(successes.append, failures.append)
if len(failures) == 1:
failures[0].raiseException()
elif len(successes) == 1:
return successes[0]
else:
raise DeferredNotFired(deferred)
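The capture pattern `extract_result` relies on — add a callback and an errback, then see which list was populated — can be shown without Twisted by using a toy Deferred stand-in (the `FakeDeferred` class is purely illustrative and much simpler than the real `twisted.internet.defer.Deferred`):

```python
class DeferredNotFired(Exception):
    """Raised when we extract a result from a Deferred that hasn't fired."""


class FakeDeferred(object):
    """Toy stand-in for a Deferred: holds at most one result and hands it
    to the matching callback when callbacks are added."""
    _NOTHING = object()

    def __init__(self):
        self._result = self._NOTHING
        self._is_failure = False

    def callback(self, value):
        self._result = value

    def errback(self, failure):
        self._result = failure
        self._is_failure = True

    def addCallbacks(self, on_success, on_failure):
        if self._result is self._NOTHING:
            return  # Not fired yet: neither handler runs.
        (on_failure if self._is_failure else on_success)(self._result)


def extract_result(deferred):
    # Same shape as the function above: whichever list gets populated
    # tells us how (and whether) the deferred fired.
    successes, failures = [], []
    deferred.addCallbacks(successes.append, failures.append)
    if failures:
        raise RuntimeError('failed with %r' % (failures[0],))
    elif successes:
        return successes[0]
    else:
        raise DeferredNotFired(deferred)


d = FakeDeferred()
d.callback(42)
value = extract_result(d)
```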
class ImpossibleDeferredError(Exception):
"""Raised if a Deferred somehow triggers both a success and a failure."""
def __init__(self, deferred, successes, failures):
msg = ('Impossible condition on %r, got both success (%r) and '
'failure (%r)')
super(ImpossibleDeferredError, self).__init__(
msg % (deferred, successes, failures))
def on_deferred_result(deferred, on_success, on_failure, on_no_result):
"""Handle the result of a synchronous ``Deferred``.
If ``deferred`` has fired successfully, call ``on_success``.
If ``deferred`` has failed, call ``on_failure``.
If ``deferred`` has not yet fired, call ``on_no_result``.
The value of ``deferred`` will be preserved, so that other callbacks and
errbacks can be added to ``deferred``.
:param Deferred[A] deferred: A synchronous Deferred.
:param Callable[[Deferred[A], A], T] on_success: Called if the Deferred
fires successfully.
:param Callable[[Deferred[A], Failure], T] on_failure: Called if the
Deferred fires unsuccessfully.
:param Callable[[Deferred[A]], T] on_no_result: Called if the Deferred has
not yet fired.
:raises ImpossibleDeferredError: If the Deferred somehow
triggers both a success and a failure.
:raises TypeError: If the Deferred somehow triggers more than one success,
or more than one failure.
:return: Whatever is returned by the triggered callback.
:rtype: ``T``
"""
successes = []
failures = []
def capture(value, values):
values.append(value)
return value
deferred.addCallbacks(
partial(capture, values=successes),
partial(capture, values=failures),
)
if successes and failures:
raise ImpossibleDeferredError(deferred, successes, failures)
elif failures:
[failure] = failures
return on_failure(deferred, failure)
elif successes:
[result] = successes
return on_success(deferred, result)
else:
return on_no_result(deferred)
def failure_content(failure):
"""Create a Content object for a Failure.
:param Failure failure: The failure to create content for.
:rtype: ``Content``
"""
return TracebackContent(
(failure.type, failure.value, failure.getTracebackObject()),
None,
)
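`TracebackContent` ultimately wraps a rendered exc_info triple; the same text can be produced with the stdlib `traceback` module (the helper name here is illustrative, and the real object also carries a MIME content type):

```python
import sys
import traceback


def traceback_text(exc_info):
    # Render an (type, value, tb) triple to the familiar multi-line
    # "Traceback (most recent call last): ..." text.
    return ''.join(traceback.format_exception(*exc_info))


try:
    1 / 0
except ZeroDivisionError:
    text = traceback_text(sys.exc_info())
```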


@@ -1,21 +0,0 @@
# Copyright (c) testtools developers. See LICENSE for details.
#
# TODO: Move this to testtools.twistedsupport. See testing-cabal/testtools#202.
from fixtures import Fixture, MonkeyPatch
class DebugTwisted(Fixture):
"""Set debug options for Twisted."""
def __init__(self, debug=True):
super(DebugTwisted, self).__init__()
self._debug_setting = debug
def _setUp(self):
self.useFixture(
MonkeyPatch('twisted.internet.defer.Deferred.debug',
self._debug_setting))
self.useFixture(
MonkeyPatch('twisted.internet.base.DelayedCall.debug',
self._debug_setting))


@@ -1,184 +0,0 @@
# Copyright (c) testtools developers. See LICENSE for details.
"""Matchers that operate on synchronous Deferreds.
A "synchronous" Deferred is one that does not need the reactor or any other
asynchronous process in order to fire.
Normal application code can't know when a Deferred is going to fire, because
that is generally left up to the reactor. Unit tests can (and should!) provide
fake reactors, or don't use the reactor at all, so that Deferreds fire
synchronously.
These matchers allow you to make assertions about when and how Deferreds fire,
and about what values they fire with.
"""
from testtools.compat import _u
from testtools.matchers import Mismatch
from ._deferred import failure_content, on_deferred_result
class _NoResult(object):
"""Matches a Deferred that has not yet fired."""
@staticmethod
def _got_result(deferred, result):
return Mismatch(
_u('No result expected on %r, found %r instead'
% (deferred, result)))
def match(self, deferred):
"""Match ``deferred`` if it hasn't fired."""
return on_deferred_result(
deferred,
on_success=self._got_result,
on_failure=self._got_result,
on_no_result=lambda _: None,
)
_NO_RESULT = _NoResult()
def has_no_result():
"""Match a Deferred that has not yet fired.
For example, this will pass::
assert_that(defer.Deferred(), has_no_result())
But this will fail:
>>> assert_that(defer.succeed(None), has_no_result())
Traceback (most recent call last):
...
File "testtools/assertions.py", line 22, in assert_that
raise MismatchError(matchee, matcher, mismatch, verbose)
testtools.matchers._impl.MismatchError: No result expected on <Deferred at ... current result: None>, found None instead
As will this:
>>> assert_that(defer.fail(RuntimeError('foo')), has_no_result())
Traceback (most recent call last):
...
File "testtools/assertions.py", line 22, in assert_that
raise MismatchError(matchee, matcher, mismatch, verbose)
testtools.matchers._impl.MismatchError: No result expected on <Deferred at ... current result: <twisted.python.failure.Failure <type 'exceptions.RuntimeError'>>>, found <twisted.python.failure.Failure <type 'exceptions.RuntimeError'>> instead
"""
return _NO_RESULT
class _Succeeded(object):
"""Matches a Deferred that has fired successfully."""
def __init__(self, matcher):
"""Construct a ``_Succeeded`` matcher."""
self._matcher = matcher
@staticmethod
def _got_failure(deferred, failure):
deferred.addErrback(lambda _: None)
return Mismatch(
_u('Success result expected on %r, found failure result '
'instead: %r' % (deferred, failure)),
{'traceback': failure_content(failure)},
)
@staticmethod
def _got_no_result(deferred):
return Mismatch(
_u('Success result expected on %r, found no result '
'instead' % (deferred,)))
def match(self, deferred):
"""Match against the successful result of ``deferred``."""
return on_deferred_result(
deferred,
on_success=lambda _, value: self._matcher.match(value),
on_failure=self._got_failure,
on_no_result=self._got_no_result,
)
def succeeded(matcher):
"""Match a Deferred that has fired successfully.
For example::
fires_with_the_answer = succeeded(Equals(42))
deferred = defer.succeed(42)
assert_that(deferred, fires_with_the_answer)
This assertion will pass. However, if ``deferred`` had fired with a
different value, or had failed, or had not fired at all, then it would
fail.
Use this instead of
:py:meth:`twisted.trial.unittest.SynchronousTestCase.successResultOf`.
:param matcher: A matcher to match against the result of a
:class:`~twisted.internet.defer.Deferred`.
:return: A matcher that can be applied to a synchronous
:class:`~twisted.internet.defer.Deferred`.
"""
return _Succeeded(matcher)
class _Failed(object):
"""Matches a Deferred that has failed."""
def __init__(self, matcher):
self._matcher = matcher
def _got_failure(self, deferred, failure):
# We have handled the failure, so suppress its output.
deferred.addErrback(lambda _: None)
return self._matcher.match(failure)
@staticmethod
def _got_success(deferred, success):
return Mismatch(
_u('Failure result expected on %r, found success '
'result (%r) instead' % (deferred, success)))
@staticmethod
def _got_no_result(deferred):
return Mismatch(
_u('Failure result expected on %r, found no result instead'
% (deferred,)))
def match(self, deferred):
return on_deferred_result(
deferred,
on_success=self._got_success,
on_failure=self._got_failure,
on_no_result=self._got_no_result,
)
def failed(matcher):
"""Match a Deferred that has failed.
For example::
error = RuntimeError('foo')
fails_at_runtime = failed(
AfterPreprocessing(lambda f: f.value, Equals(error)))
deferred = defer.fail(error)
assert_that(deferred, fails_at_runtime)
This assertion will pass. However, if ``deferred`` had fired successfully,
had failed with a different error, or had not fired at all, then it would
fail.
Use this instead of
:py:meth:`twisted.trial.unittest.SynchronousTestCase.failureResultOf`.
:param matcher: A matcher to match against the result of a failing
:class:`~twisted.internet.defer.Deferred`.
:return: A matcher that can be applied to a synchronous
:class:`~twisted.internet.defer.Deferred`.
"""
return _Failed(matcher)

Some files were not shown because too many files have changed in this diff