Copyright © 2009 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
This document describes a methodology that was used successfully to define, extract and maintain test assertions from a W3C specification in synchronization with its test suite.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
This document was published by the Mobile Web Test Suites Working Group as an Editor's Draft. If you wish to make comments regarding this document, please send them to public-mwts@w3.org (subscribe, archives). All feedback is welcome.
Publication as an Editor's Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
The Mobile Web Test Suites Working Group worked with the editors of the Widgets family of specifications in the Web Applications Working Group on a methodology that made it possible to automate the extraction of test assertions from the specifications and to maintain them in parallel with the building of the test suite.
Most W3C specifications use the RFC 2119 [RFC2119] keywords (must, should, may, etc.) to indicate the level of requirement that is imposed on an implementation of the specification.
Taking an example from the Widgets Packaging and Configuration [WIDGETS] specification:
If a user agent encounters a file matching a file name given in the file name column of the default start files table in an arbitrary folder, then the user agent must treat that file as an arbitrary file.
After following the definitions given in the specification, the conformance requirement above can be easily turned into a testable assertion (which itself can be used to derive test cases):
Upon encountering a file matching either index.htm, index.html, index.svg, index.xhtml, or index.xht in a folder different from the root directory and the locales directory, a user agent does not treat this file as a default start file, a default icon, or a configuration file.
Given their similarities, a conformance requirement can be used as an equivalent to a testable assertion, provided it contains all the necessary information to make it testable (see The Structure of a Test Assertion in the Test Assertions Guidelines [OASIS-TAG]), namely:
an identification of the product that is supposed to follow the requirement — in this case, the “user agent”,
a clear definition of the prerequisite under which the requirement applies — in this case, “encountering a file matching”,
a clear definition of the behaviour the product is expected to exhibit — in this case, “treating a file as an arbitrary file”.
There are common mistakes in writing conformance requirements that make them much harder to use as testable assertions, including:
creating conformance requirements for products that don't have behaviour, e.g. “a configuration file must be well-formed” — this cannot be tested, since it doesn't say what a product should do when that condition isn't met,
using the passive voice to describe the behaviour, e.g. “an invalid configuration file must be ignored” — this hides which product is supposed to follow the prescribed behaviour,
using ill-defined behaviours, e.g. “a user agent must reject a widget without a configuration file” without defining “reject” — this again makes it impossible to define the outcome of the testable assertion.
The first step in turning the specification into a useful source of testable assertions is logically to fix as many of these problems as possible. In the process of doing so, it is possible to use specific mark-up conventions that can facilitate the creation and maintenance of the test suite.
To make it easy to create test cases based on a given test assertion, that test assertion minimally needs:
to have a unique identifier — this makes it possible, among other things, to group test cases by test assertion, to refer to a specific test assertion, and to assess the coverage of the test suite;
to be attached to a specific conformance product — this defines how the test cases are built, based on how the product is supposed to operate;
to define a level of requirement for conformance (mandatory, recommended, optional);
to contain the prerequisite and the expected behaviour attached to the assertion, along with as much context as possible on the meaning of the terms in use;
to link back to the specification — where the tester can get more context on the definitions and the spirit of the assertion when the letter of it is not enough.
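As an illustration, the information listed above can be thought of as a simple record per assertion. The sketch below is only hypothetical (none of the field names are prescribed by the methodology), populated from the example assertion discussed earlier:

// Hypothetical record for one test assertion; field names are
// illustrative only.
var assertion = {
  id: "ta-RRZxvvTFHx",    // unique identifier
  product: "user agent",  // conformance product the requirement targets
  level: "must",          // mandatory (must) / recommended (should) / optional (may)
  prerequisite: "the user agent encounters a file matching a default start file name in an arbitrary folder",
  behaviour: "the user agent treats that file as an arbitrary file",
  specLink: "http://www.w3.org/TR/2009/WD-widgets-20091029/#ta-RRZxvvTFHx"
};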
Using mark-up conventions makes it possible to extract the test assertions, together with the required information, automatically from the specification; re-using the example test assertion above, the following mark-up highlights the important information:
<p id="ta-RRZxvvTFHx">
If a user agent encounters a<a href="#file">
file</a>
matching a file name given in the file name column of the<a href="#default-start-files-table">
default start files table</a>
in an<a href="#arbitrary">
arbitrary</a> <a href="#folder">
folder</a>
, then<a class="product-ua" href="#user-agent">
user agent</a> <em class="ct">
must</em>
treat that file as an<a href="#arbitrary">
arbitrary</a>
file.</p>
That mark-up achieves the following results:
It encompasses all the useful information in a single HTML element: the paragraph enclosed in <p> tags.
The level of requirement is marked up with an emphasis element (<em>) whose class attribute is set to ct; this mark-up convention also makes it possible to determine whether a given paragraph of the specification contains a requirement.
Each assertion is uniquely identified through the id attribute on the paragraph element; by convention, the unique identifier starts with ta-, and its uniqueness is ensured by the HTML validity requirements of the document.
This same id attribute makes it possible to link back to the exact point in the specification where the requirement is made; for instance, the assertion above can be found at http://www.w3.org/TR/2009/WD-widgets-20091029/#ta-RRZxvvTFHx.
The conformance product to which the requirement applies is marked up with a class attribute set to one of the predefined values — in the case of the Widgets Packaging and Configuration specification, product-ua and product-cc, respectively for user agents and conformance checkers.
The requirement contains internal links to the definitions of the terms in use; for instance, the term “file” links to the definition of that term in the context of the specification with <a href="#file">.
There are a number of technologies that can be used to filter an HTML document based on the kind of mark-up conventions described above.
In the course of the co-development of the test suite and the specification, the group and the specification editors used two of these technologies:
XSLT, a transformation language that can be applied to any XML language (including XHTML); using XSLT makes it possible to generate a new HTML document based on the original one, with as much filtering, modification and reorganization as needed;
JavaScript, which can operate on HTML documents through the Document Object Model.
The original extraction of test assertions was done with an XSLT style sheet, which generated a static list of test assertions that served as the initial basis for reviewing the testability of the specification.
Over time, the extraction system was switched to the JavaScript-based approach, which made the test plan [WIDGETS-TESTS] easier to maintain: the test assertions could be obtained automatically from the specification, while the rest of the test plan remained easy to update.
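As a rough sketch of what such a JavaScript-based extraction can look like, the function below walks the DOM of the specification and relies only on the mark-up conventions described above; the function name and the shape of the returned records are hypothetical, not the code actually used by the group:

// Sketch: extract test assertions from the specification's DOM,
// using the mark-up conventions described above.
function extractTestAssertions(doc) {
  var assertions = [];
  // By convention, each test assertion is a paragraph whose id starts with "ta-".
  var paragraphs = doc.querySelectorAll('p[id^="ta-"]');
  for (var i = 0; i < paragraphs.length; i++) {
    var p = paragraphs[i];
    // A paragraph contains a requirement only if it holds an <em class="ct">.
    var level = p.querySelector('em.ct');
    if (!level) { continue; }
    // The conformance product is marked with a class such as "product-ua".
    var product = p.querySelector('a[class^="product-"]');
    assertions.push({
      id: p.id,                                     // e.g. "ta-RRZxvvTFHx"
      level: level.textContent,                     // e.g. "must"
      product: product ? product.className : null,  // e.g. "product-ua"
      text: p.textContent,
      link: "#" + p.id                              // link back into the specification
    });
  }
  return assertions;
}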
Once the test assertions have been identified and extracted, the work of creating test cases for each test assertion is vastly simplified.
A test writer can look at a given test assertion, create a test case that matches the prerequisites set in the assertion, and document the expected outcome described by the required behaviour.
Each test case can be associated with a given test assertion; later on, when the test suite is run and a test case fails, the assertion behind it can be identified very quickly, making it possible to evaluate whether the implementation, the test case or the specification is at fault.
To maintain the association between test cases and test assertions, a simple XML file was set up:
<testsuite for="http://www.w3.org/TR/widgets/">
  <test id="aa" for="ta-ACCJfDGwDQ"
        src="test-cases/ta-ACCJfDGwDQ/000/aa.wgt">
    Tests that the UA rejects configuration documents that don't have
    a correct widget element at the root. To pass, the UA must treat
    this as an invalid widget (the root element is not widget).
  </test>
</testsuite>
The test element encompasses data about a single test case:
a unique identifier for the test case, set in the id attribute,
the targeted test assertion, set in the for attribute,
the file used to test the runtime engine, set in the src attribute,
and the expected outcome of the test described as the textual content of the element.
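Assuming this XML file has been parsed into a DOM document (called testsuiteDoc below, a made-up name), the association also supports the failure diagnosis mentioned earlier; a minimal sketch:

// Sketch: given the identifier of a failing test case, find the
// test assertion it targets in the test suite description.
function assertionForTest(testId, testsuiteDoc) {
  var tests = testsuiteDoc.getElementsByTagName("test");
  for (var i = 0; i < tests.length; i++) {
    if (tests[i].getAttribute("id") === testId) {
      return tests[i].getAttribute("for"); // e.g. "ta-ACCJfDGwDQ"
    }
  }
  return null; // unknown test case
}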
This XML file makes it possible to generate the final round of packaging and information needed for the test suite:
its content is integrated in the test plan with JavaScript to attach test cases to the previously extracted test assertions;
it makes it possible to assess the coverage of the test suite quickly, by finding which test assertions don't have matching test cases (a sketch of this check follows the list);
the list of test cases can be used to create simple test harnesses for widget runtime engines;
the same list is used to generate an interoperability report comparing the results of running the test cases on various runtime engines [WIDGETS-INTEROP].
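The coverage check mentioned in the list above can be sketched in a few lines, assuming the assertion identifiers have been extracted from the specification (for instance with the hypothetical extractTestAssertions function sketched earlier):

// Sketch: list the test assertions for which no test case exists yet.
function findUncoveredAssertions(assertionIds, testsuiteDoc) {
  var covered = {};
  var tests = testsuiteDoc.getElementsByTagName("test");
  for (var i = 0; i < tests.length; i++) {
    // each test declares its targeted assertion in the "for" attribute
    covered[tests[i].getAttribute("for")] = true;
  }
  return assertionIds.filter(function (id) {
    return !covered[id];
  });
}

An empty result means every extracted assertion has at least one matching test case.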
While the methodology described above uses three separate steps (making the specification testable, marking it up, and linking test assertions to test cases), these steps don't have to be applied sequentially, and in practice they have proved to work best as part of a more integrated and iterative process.
This methodology has proved very successful for the Widgets Packaging and Configuration specification, and is now being applied to the other Widgets specifications developed by the Web Applications Working Group.
No normative references.