Testing setup#

Now let's take a moment (this chapter) to understand how Plone tests work and how they're set up.

To run tests in Plone you need three things:

  • a test runner

  • a testing setup

  • tests

The plonecli tool already creates these things for us, so most of the configuration has already been done, including some basic tests to use as a starting point.

Note

If you want to add tests from scratch into your addon or you want to update your testing environment, the best way is to create a new package using plonecli with the same namespace, and then copy the generated files that you need into your addon.

Test runner#

A test runner is a script that collects all available tests and executes them.

Plone's test runner is a zope.testrunner script called "test", generated by a buildout recipe.

If we inspect the base.cfg file, we see a test part that uses this recipe:

[test]
recipe = zc.recipe.testrunner
eggs = ${instance:eggs}
initialization =
    os.environ['TZ'] = 'UTC'
defaults = ['-s', 'plonetraining.testing', '--auto-color', '--auto-progress']

We set some defaults for running tests:

  • -s plonetraining.testing restricts the run to tests from a specific test suite (plonetraining.testing)

  • --auto-color generates colored console output, with green and red reports for succeeding and failing tests

  • --auto-progress prints the progress on the console

Note

In the previous chapter, we used plonecli to run tests, but that command is only a wrapper for the bin/test script.

Note

If we have a project with several addons and we want to test them all, we could add a similar configuration to our project's buildout.

In the eggs option we could list all packages that we want to test.

If we remove '-s', 'plonetraining.testing' from the defaults, all tests from packages listed in the eggs option will be run by default.
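For example, a project-level buildout could contain a test part like this (my.addon and my.other.addon are hypothetical placeholders):

[test]
recipe = zc.recipe.testrunner
eggs =
    my.addon
    my.other.addon
initialization =
    os.environ['TZ'] = 'UTC'
defaults = ['--auto-color', '--auto-progress']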

Testing setup#

The testing setup for a Plone package is in a file called testing.py. It looks like this:

# -*- coding: utf-8 -*-
from plone.app.contenttypes.testing import PLONE_APP_CONTENTTYPES_FIXTURE
from plone.app.robotframework.testing import REMOTE_LIBRARY_BUNDLE_FIXTURE
from plone.app.testing import applyProfile
from plone.app.testing import FunctionalTesting
from plone.app.testing import IntegrationTesting
from plone.app.testing import PloneSandboxLayer
from plone.testing import z2

import plonetraining.testing


class PlonetrainingTestingLayer(PloneSandboxLayer):

    defaultBases = (PLONE_APP_CONTENTTYPES_FIXTURE,)

    def setUpZope(self, app, configurationContext):
        # Load any other ZCML that is required for your tests.
        # The z3c.autoinclude feature is disabled in the Plone fixture base
        # layer.
        import plone.restapi

        self.loadZCML(package=plone.restapi)
        self.loadZCML(package=plonetraining.testing)

    def setUpPloneSite(self, portal):
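        # Install our add-on's GenericSetup "default" profile in the test site.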
        applyProfile(portal, 'plonetraining.testing:default')


PLONETRAINING_TESTING_FIXTURE = PlonetrainingTestingLayer()


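# Integration tests: each test runs in a transaction that is rolled back on
# tearDown. Fast, but commits and browser simulation are not possible.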
PLONETRAINING_TESTING_INTEGRATION_TESTING = IntegrationTesting(
    bases=(PLONETRAINING_TESTING_FIXTURE,),
    name='PlonetrainingTestingLayer:IntegrationTesting',
)


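# Functional tests: transactions are committed, so tools like
# zope.testbrowser can interact with the site.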
PLONETRAINING_TESTING_FUNCTIONAL_TESTING = FunctionalTesting(
    bases=(PLONETRAINING_TESTING_FIXTURE,),
    name='PlonetrainingTestingLayer:FunctionalTesting',
)


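# Acceptance tests: a real server is started, so Robot Framework can drive
# an actual browser against the site.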
PLONETRAINING_TESTING_ACCEPTANCE_TESTING = FunctionalTesting(
    bases=(
        PLONETRAINING_TESTING_FIXTURE,
        REMOTE_LIBRARY_BUNDLE_FIXTURE,
        z2.ZSERVER_FIXTURE,
    ),
    name='PlonetrainingTestingLayer:AcceptanceTesting',
)

There are three main pieces:

  • a layer definition (PlonetrainingTestingLayer): it sets up the layer, declares the preset environments (called fixtures) that it builds on, and makes our packages available in the testing environment.

  • a package fixture definition: this is the base setup for testing our package (PLONETRAINING_TESTING_FIXTURE).

  • different test types: depending on our needs, we can use different test types, such as functional or integration tests.

plone.app.testing provides a set of base layers and fixtures that we use as a starting point.
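For example, a package that does not need its own installation profile could build a layer directly on the stock Plone fixture (a minimal sketch; the names are illustrative):

from plone.app.testing import IntegrationTesting
from plone.app.testing import PLONE_FIXTURE

# An integration layer built directly on the base Plone fixture.
MINIMAL_INTEGRATION_TESTING = IntegrationTesting(
    bases=(PLONE_FIXTURE,),
    name='Minimal:IntegrationTesting',
)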

Note

We need to load all ZCML dependencies manually, because the z3c.autoinclude feature is disabled in plone.app.testing to preserve test isolation.

Setup and teardown hooks#

plone.app.testing provides a set of hooks that we can use to perform several actions before a test or test suite runs (using setUp) or after it runs (using tearDown).

In the testing.py file, we usually use these hooks:

  • setUpZope(self, app, configurationContext): to configure Zope (mostly loading the ZCML configuration of the packages under test) and its dependencies

  • setUpPloneSite(self, portal): to configure the actual Plone site, for example installing the addon that we are going to test.

  • tearDownPloneSite(self, portal): to clean up Plone when all tests end.

  • tearDownZope(self, app): to clean up Zope when all tests end.

These hooks are called once when the layer is set up and torn down, and the resulting environment is shared by every test case that uses that layer.

In each test case, we could have the following methods:

  • setUp(self)

  • tearDown(self)

We use these methods to define some common variables (for example, to access the portal object or the request), to pre-populate the site with content or to change permissions.

These methods are called for every single test.
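For example, an integration test case could use setUp to fetch the portal from the layer and pre-create some content (a sketch; the test case and the document created here are illustrative):

import unittest

from plone.app.testing import setRoles
from plone.app.testing import TEST_USER_ID

from plonetraining.testing.testing import PLONETRAINING_TESTING_INTEGRATION_TESTING


class TestExample(unittest.TestCase):

    layer = PLONETRAINING_TESTING_INTEGRATION_TESTING

    def setUp(self):
        # Called before every test in this test case.
        self.portal = self.layer['portal']
        self.request = self.layer['request']
        setRoles(self.portal, TEST_USER_ID, ['Manager'])
        self.portal.invokeFactory('Document', 'doc', title='A document')

    def test_document_title(self):
        self.assertEqual(self.portal['doc'].Title(), 'A document')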

Tests#

Tests are located in the tests folder.

In this folder you can create as many tests as you want, in one or more files. The only requirement is that the filenames should start with test_.

Tests can be grouped into test cases depending on the test type (unit, functional, integration or robot) and on the functionality that they are testing.

A test case defines which layer should be used, can set up the environment before test execution (using the setUp method) and can perform some actions after all tests have been executed (with the tearDown method).

plonecli creates a basic test case for testing that the addon installs correctly and registers its browserlayer.
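A simplified sketch of such a test case might look like this (assuming Plone 5.2, where get_installer is available, and that interfaces.py defines the browser layer interface following the plonecli naming convention):

import unittest

from plone.browserlayer import utils
from Products.CMFPlone.utils import get_installer

from plonetraining.testing.testing import PLONETRAINING_TESTING_INTEGRATION_TESTING


class TestSetup(unittest.TestCase):

    layer = PLONETRAINING_TESTING_INTEGRATION_TESTING

    def setUp(self):
        self.portal = self.layer['portal']
        self.installer = get_installer(self.portal, self.layer['request'])

    def test_product_installed(self):
        self.assertTrue(
            self.installer.is_product_installed('plonetraining.testing')
        )

    def test_browserlayer_registered(self):
        # Hypothetical import: assumes the generated interfaces.py module.
        from plonetraining.testing.interfaces import IPlonetrainingTestingLayer

        self.assertIn(IPlonetrainingTestingLayer, utils.registered_layers())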

Assertions#

A test is a method that does something (such as calling a method, instantiating a class, or executing more complex behavior) and checks that the result is what we expected.

These checks are made by assertions: statements that verify that a computed value matches the expected one.

If an assertion fails, the test itself fails. We can include as many assertions as we want in a single test, and they must all succeed.

There are different types of assertions that we can use. For example:

assertEqual(a, b)
    a == b

assertTrue(x)
    bool(x) is True

assertFalse(x)
    bool(x) is False

assertIsNotNone(x)
    x is not None

assertIn(a, b)
    a in b

assertIsInstance(a, b)
    isinstance(a, b)

assertRaises(exc, fun, *args, **kwds)
    fun(*args, **kwds) raises exc

assertGreater(a, b)
    a > b

assertGreaterEqual(a, b)
    a >= b

Many assertions also have a negated version:

assertNotEqual(a, b)
    a != b

assertNotIn(a, b)
    a not in b
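In practice, a test method combines several of these assertions, for example (a contrived sketch):

import unittest


class TestAssertions(unittest.TestCase):

    def test_examples(self):
        self.assertEqual(1 + 1, 2)
        self.assertIn('o', 'Plone')
        self.assertIsInstance([], list)
        # assertRaises can also be used as a context manager.
        with self.assertRaises(ZeroDivisionError):
            1 / 0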