Testing Neutron
=============================================================

Overview

    The unit tests are meant to cover as much code as possible and should
    be executed without the service running. They are designed to test
    the various pieces of the neutron tree to make sure any new changes
    don't break existing functionality.

    The functional tests are intended to validate actual system
    interaction.  Mocks should be used sparingly, if at all.  Care
    should be taken to ensure that existing system resources are not
    modified and that resources created in tests are properly cleaned
    up.

Running tests

    There are two mechanisms for running tests: run_tests.sh and tox.
    Before submitting a patch for review, you should always ensure that
    all tests pass; a tox run is triggered by the Jenkins gate executed
    on Gerrit for each patch pushed for review.
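
    For example, the full unit test suite can be run with either
    mechanism (the py27 environment name is an assumption based on a
    typical tox.ini; adjust it to match your checkout):

    $ ./run_tests.sh
    $ tox -e py27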

    With both mechanisms you can either run the tests in the standard
    environment or create a virtual environment to run them in.
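
    For instance, with run_tests.sh the choice is usually controlled by
    a flag (the flag names below assume the common OpenStack
    run_tests.sh template; check ./run_tests.sh --help for your
    checkout):

    $ ./run_tests.sh -V    # build and use a virtual environment
    $ ./run_tests.sh -N    # run in the current (standard) environment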

    By default, after running all of the tests, any pep8 errors
    found in the tree will be reported.
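
    To run only the pep8 checks, both mechanisms offer a shortcut (the
    pep8 tox environment name assumes the usual tox.ini layout):

    $ ./run_tests.sh -p
    $ tox -e pep8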

Running individual tests

    To run individual test modules or cases, pass the dot-separated
    path of the module you want as an argument to run_tests.sh or tox.

    To execute a specific test case, specify the name of the test case
    class, separating it from the module path with a colon.

    For example, the following would run only the JSONV2TestCase tests from
    neutron/tests/unit/test_api_v2.py:

      $ ./run_tests.sh neutron.tests.unit.test_api_v2:JSONV2TestCase

    or

      $ tox -e py27 neutron.tests.unit.test_api_v2:JSONV2TestCase
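
    The run can be narrowed further to a single test method by
    appending the method name after the class (the test name below is
    only an illustration, not an actual test in the tree):

      $ ./run_tests.sh neutron.tests.unit.test_api_v2:JSONV2TestCase.test_list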

Adding more tests

    Neutron has a fast-growing code base, and there are plenty of areas
    that need to be covered by unit and functional tests.

    To get a grasp of the areas where tests are needed, you can check
    current coverage by running:

    $ ./run_tests.sh -c
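
    If your tox.ini defines a cover environment (as neutron's typically
    does), a coverage report can also be generated with:

    $ tox -e cover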

Development process

    It is expected that any new changes proposed for merge come with
    tests for that feature or code area. Ideally, any bug fixes that
    are submitted also have tests to prove that they stay fixed!  In
    addition, before proposing a change for merge, all of the current
    tests should be passing.

Debugging

    By default, calls to pdb.set_trace() will be ignored when tests
    are run.  For pdb statements to work, invoke run_tests.sh as follows:

    $ ./run_tests.sh -d [test module path]
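
    For example, to honor pdb breakpoints while running the API v2
    tests used in the examples above:

    $ ./run_tests.sh -d neutron.tests.unit.test_api_v2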

    It's possible to debug tests in a tox environment:

    $ tox -e venv -- python -m testtools.run [test module path]
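
    For instance, reusing the module from the earlier examples:

    $ tox -e venv -- python -m testtools.run neutron.tests.unit.test_api_v2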

    Tox-created virtual environments (venv's) can also be activated
    after a tox run and reused for debugging:

    $ tox -e venv
    $ . .tox/venv/bin/activate
    $ python -m testtools.run [test module path]

    Tox packages and installs the neutron source tree in a given venv
    on every invocation, but if modifications need to be made between
    invocations (e.g., adding more pdb statements), it is recommended
    that the source tree be installed in the venv in editable mode:

    # run this only after activating the venv
    $ pip install --editable .

    Editable mode ensures that changes made to the source tree are
    automatically reflected in the venv, and that such changes are not
    overwritten during the next tox run.

Post-mortem debugging

    Setting OS_POST_MORTEM_DEBUG=1 in the shell environment will ensure
    that pdb.post_mortem() will be invoked on test failure:

    $ OS_POST_MORTEM_DEBUG=1 ./run_tests.sh -d [test module path]