This document defines principles & practices to support the creation of useful and usable conformance test-suites. (Note: while much of its contents are applicable to other forms of testing, the scope of this revision of the guidelines is limited to conformance testing.)
Potential users of the test suite need to know whether this test suite applies to them, the extent to which they can rely on it, and where they might need to focus additional testing efforts.
Specify which specifications are covered and what testing strategy was adopted.
Users need to know what is covered and what is not. It must be possible to map individual tests back to the specification; if a test fails, the user of the test suite must understand what portion of the implementation is at fault.
Best Practice: Assertion lists are an effective way of documenting tests and mapping them back to the spec.
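An assertion list can be as simple as a table keyed by test ID. The sketch below is illustrative only: the test IDs, section numbers, and assertion wording are hypothetical, not taken from any real specification.

```python
# Hypothetical assertion list: each test maps back to one specific
# requirement ("assertion") in the specification.
ASSERTIONS = [
    {"test_id": "t001", "spec_section": "4.2",
     "assertion": "An implementation MUST reject malformed input."},
    {"test_id": "t002", "spec_section": "4.3",
     "assertion": "An implementation MUST preserve attribute order."},
]

def locate_failure(test_id):
    """Given a failing test, report the spec portion at fault."""
    for entry in ASSERTIONS:
        if entry["test_id"] == test_id:
            return f'Section {entry["spec_section"]}: {entry["assertion"]}'
    return "Unknown test"
```

With such a table in place, a failed test points the user directly at the portion of the specification (and hence of the implementation) that is at fault.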
If test results are not repeatable and reproducible they cannot be relied upon, and they cannot be compared with other execution results.
The components of a particular revision of the test suite must be unambiguously identified. It is not sufficient to point users to a web site that is randomly updated and that contains an amorphous collection of test materials. Test materials must be packaged together into a "test suite" and published with a version number. The test suite must contain documentation that describes its contents and explains how to use it.
It must be possible to determine what tests must be executed for a particular implementation, allowing non-applicable tests to be filtered out.
Best Practice: Define metadata for each test so that non-applicable tests can be filtered out.
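One way to realize this filtering is to tag each test with the optional features it exercises, then keep only the tests whose prerequisites the implementation supports. The test IDs and feature names below are hypothetical, a minimal sketch of the idea.

```python
# Each test declares the optional features it requires; a test is
# applicable only if the implementation provides all of them.
TESTS = [
    {"id": "t101", "requires": set()},           # core feature: always applicable
    {"id": "t102", "requires": {"encryption"}},  # optional module
    {"id": "t103", "requires": {"compression", "encryption"}},
]

def applicable_tests(implemented_features):
    """Return IDs of tests whose required features are all implemented."""
    return [t["id"] for t in TESTS
            if t["requires"] <= implemented_features]  # subset test
```

For an implementation that supports only the hypothetical "encryption" feature, this selects the core test and the encryption test while filtering out the combined compression-and-encryption test.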
The test suite documentation must clearly explain how to execute the tests.
[Requirements 2.2 and 2.3 imply that two different users will execute the same tests in the same manner on a particular implementation.]
Best Practice: Either provide a test harness with supporting tools, libraries, and frameworks, or provide sufficient metadata and documentation to allow a test harness to be constructed.
Tests should report status (passed, failed, not run, etc. - see EARL categories from previous draft) in an unambiguous and consistent manner.
Best Practice: If possible, tests should report what went wrong (what they were expecting, and what happened), as an aid to debugging the problem.
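A small runner can enforce both practices at once: every test yields one of a fixed set of outcome values, and failures carry the expected and actual results as a debugging aid. The outcome names below are illustrative stand-ins for the EARL-style categories the text mentions, not the exact EARL vocabulary.

```python
# Fixed outcome vocabulary, reported uniformly by every test.
PASSED, FAILED, NOT_RUN = "passed", "failed", "notRun"

def run_test(test_id, expected, actual_fn):
    """Run one test and report an unambiguous outcome, including the
    expected and actual values so failures can be debugged."""
    try:
        actual = actual_fn()
    except Exception as exc:
        # The test could not be executed at all.
        return {"id": test_id, "outcome": NOT_RUN, "detail": str(exc)}
    outcome = PASSED if actual == expected else FAILED
    return {"id": test_id, "outcome": outcome,
            "expected": expected, "actual": actual}
```

Because every result record has the same shape, results from different users and different runs can be compared mechanically.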
Test suites must evolve, as problems are identified and fixed, as coverage is increased, or to address revisions and errata of applicable specifications.
Plan for multiple releases of the test suite. Ideally, a new version of the test suite should be released for each revision/errata of the specification. Version numbers should be supplied. Users should understand which version of the test suite is appropriate for a particular implementation.
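The mapping from specification revision to test-suite release can itself be published as data, so a user can mechanically pick the right suite version. The revision and release identifiers below are hypothetical.

```python
# Hypothetical mapping from specification revision to the test-suite
# release that covers it (including errata fixes).
SUITE_FOR_SPEC = {
    "1.0": "ts-1.0.2",  # covers spec 1.0 plus its published errata
    "1.1": "ts-1.1.0",
}

def suite_version(spec_revision):
    """Return the test-suite release for a spec revision, or None."""
    return SUITE_FOR_SPEC.get(spec_revision)
```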
Users must be provided with a formal channel for reporting problems in the test materials (tests, test harness, documentation). Note that problems reported against the test suite may reveal problems in the specification itself. Problems may be addressed by patching the existing test suite or by issuing a new release.
Best Practice: Implement a formal bug-tracking or issue-tracking process to manage bug reports.
Best Practice: Patching an existing test suite is difficult; re-releasing the entire test suite, even if the changes are minor, might be the simplest and least confusing way to release updates.
Treat test development like product development - it is (or should be) a formal engineering process.
For the highest-quality test suite: