This document is a guide to defining and creating test assertions for specifications. Its purpose is to help you understand what test assertions are, why they are worth creating, and, most importantly, how to create them.
This document is addressed to specification writers, World Wide Web Consortium (W3C) Working Group (WG) chairs, software developers, W3C WG members, and others interested in testing and quality issues. There are both W3C and non-W3C references in this document, and so it is hoped that non-W3C members will find this document useful and applicable as well. This document is meant to appeal to the novice as well as the experienced creator of test assertions. After this introduction, sections are presented on: definitions of test assertions, the benefits and rationale of test assertions, methods for building and using test assertions, examples of test assertions, and finally some general guiding principles for test assertion development. After reading this document, you should have a basic understanding of test assertions and their importance, and be ready to create test assertions for your purposes.
This is a document produced by the W3C Quality Assurance Interest Group (QAIG). Consequently, this document is intended to be a "companion" document to the following other W3C QA documents: QA Framework: Specification Guidelines (QA Spec GL), Test Development FAQ, Test Case Metadata Note, and the QA Wiki: Testable or not? information. These other W3C QA documents were used as resources in preparing this document.
There are several slightly varying definitions of "test assertions" and similar concepts, depending on the approach or point of view of the definer. These definitions (as well as similar concepts) are listed below for completeness. However, it may be more important to emphasize the "common" functionality implied in these varying definitions than to concentrate on the nuances of the differences themselves.
For example, QA Framework: Specification Guidelines states: "A test assertion is a measurable or testable statement of behavior, action, or condition. It is contained within or derived from the specification's requirements and provides a normative foundation from which one or more test cases can be built."
Patrick Curran, of the QAIG, states that "Test assertions are simple, unambiguous, declarative statements of required functionality. They are either contained within or derived from the specification".
Another definition of "test assertion" is at the Unisoft Site, as in:
"A test assertion (also known as a test description), details an individual unit of functionality or behavior derived from statements contained in the API specification to be tested."
Still another definition of test assertion is at the Understanding the WS-I Test Tools Site, as in:
"A test assertion is a testable expression of one or more requirements in the Basic Profile."
The W3C QA Glossary defines "test assertion" as "a set of premises that are known to be true by definition in the specification."
Finally, a definition of "assert" (from Webster's New Riverside University Dictionary, 1984) is "to state or express positively". Thus, by extension, a "test assertion" might be "a positive statement that can be tested".
A possible issue in defining "test assertion" may be that different terms are used for similar concepts in multiple documents related to testing. For example, the term "conformance assertion" is used in the BioAPI Release Notes; is this the same concept as a "test assertion" here?
In the WHQL Test Specification FAQ, "test assertion" is mentioned as being included in a "test specification"; is "test specification" the same as "test assertion" as defined previously? In the BentoWeb Project, "test assertion" is part of the "test purpose"; is "test purpose" the same as "test assertion" as defined previously?
Finally, the term "test assertion" is not used at all in IEEE 829: 1998 Standard for Software Test Documentation, which documents the testing of software and specifies eight stages in the documentation process (test plan, test design specification, test case specification, test procedure specification, test item transmittal report, test log, test incident report, and test summary report). The first two of these stages (which seem closest to the "test assertion" concept) are:
"Test Plan: A detail of how the test will proceed, who will do the testing, what will be tested, in how much time the test will take place, and to what quality level the test will be performed", and
"Test Design Specification: A detail of the test conditions and the expected outcome. This document also includes details of how a successful test will be recognized."
Are "test plan" and "test design specification" the same concept as "test assertion" as defined previously?
Goals of successful specification development may be: (1) widespread use/adoption, and (2) interoperability of implementations. The use of test assertions may advance these goals by: (1) benefitting a specification's quality, and (2) improving the quality of tests of conformance to that specification. These benefits are complementary because test assertions represent an important connection from the text of a specification to a test suite that will verify conformance to that specification. This is because test assertions are derived from the requirements of a specification, and so connect a specification to the tests pertaining to it.
In terms of specification quality, test assertions can provide invaluable insights on the meaning of conformance to a specification. It should be possible to identify test assertions within a specification in order for a specification's quality to be assured. Otherwise, implementors may interpret specification requirements differently, leading to incompatible implementations. It may be difficult to verify that a specification is "testable" (see Section 4) without test assertions. Thus, the motivation for creation of test assertions is to promote widespread implementation and adoption of a specification's technology.
To be "testable", specifications need to be verifiable, or correct. Thus, unambiguous semantics in a specification are desirable so that there is a precise idea of behavior, offering guidelines for implementation of a specification. For specifications, test assertions may help to establish unambiguous semantics by illustrating how conformance requirements may be satisfied.
Test assertions facilitate the development of consistent, complete specifications. Developing or extracting test assertions helps uncover inconsistencies, gaps, and non-testable statements in the specification. The activity of developing test assertions may identify and force clarification of ambiguities, contradictions, and even omissions in a specification, particularly early in a specification's development process. Thus the use of test assertions can provide early feedback to the editors regarding areas that need attention.
In terms of improving the quality of tests and test suites, test assertions may be important because they can be a foundation for the testing process of a specification. Use of test assertions can promote the early development of tests; since each test assertion may be broken down into one or multiple tests, test assertions can be a starting point for developing conformance tests, or a conformance test suite, that is widely used and respected, and is of demonstrable "quality". In turn, conformance tests should address the test assertions from which they are derived. Test assertions may be valuable as input to test development efforts at the beginning of the test development process. In summary, without test assertions, test developers may not be able to develop a useful test suite.
There is anecdotal evidence to support the previously mentioned benefits of test assertions in satisfying the previously stated goals. First, the QA Framework: Specification Guidelines (QA Spec GL) Good Practice 12 states "write test assertions"; the veracity of this good practice was confirmed in the review process for progression of that document to Recommendation status.
Second, the use of test assertions in testing seems to be somewhat widespread (per the examples following), which anecdotally would seem to support the efficacy of their use in testing.
Specifically, for XForms 1.0, a benefit specifically mentioned in a response to a survey was that the process of generating the assertions identified overly vague language in places. It was also mentioned, in another response to the same survey, that identifying assertions before the progression of a specification to W3C Candidate Recommendation status was advantageous.
In a separate response to the same survey for VoiceXML and SSML, a form-based process used to track test assertions was determined to be a key aspect for efficient management of the development of the implementation reports for these two technologies.
Cascading Style Sheets (CSS) is widespread, and test assertions were used in the preparation of the CSS1 Test Suite. Similarly, web services, as in Web Services Interoperability Organization Basic Profile 1.1 Test Assertions Version 1.1 are becoming more popular, and test assertions (see previous reference) were used in evaluating the quality of Web services specifications.
Test assertions can take different forms, but it may be more beneficial to notice the commonalities (or common functionalities implied in these different forms) than to concentrate on the differences in representations.
In general, test assertions are statements, processable by humans or machines.
Test assertions may be composed of (implicitly or explicitly) several different parts, or "functions", which may be described generally as: "precondition", "main body", and "postcondition".
A "precondition" establishes the context under which a specification requirement applies. Since the testability of a test assertion may be relative to the context in which the test assertion is used, a test assertion can be testable in one context and not testable in another. Some examples of "precondition" follow: an authoring tools test assertion relating to accessibility of preview modes would only apply if the tool contained a preview mode, and a Web Content Accessibility Guidelines (WCAG) 2.0 success criterion (test assertion) dealing with content not violating the Red Flash Threshold would only apply to flashing content. A precondition for rule application of a CSS test assertion might be that the CSS syntax was correct, as demonstrated by passing the W3C CSS Validator.
In the "main body", many test assertions contain a reference to the specification requirement covered by the test assertion. The test assertion statement in the "main body" should be "testable", in the sense of the definition of "testability" in QA Framework: Specification Guidelines. Furthermore, a test assertion should support the definition of (or be mappable to) at least one actual test that can in fact test the outcome of (evaluate the truth of) that test assertion. Consequently, in terms of the Test Case Metadata, a test assertion may be entered as data for the "purpose" or "description" metadata elements. NOTE: If the result of the test is unpredictable, it may be because the test assertion needs clarification. The end goal of the "main body" is a representation of the specification requirement as an assertion that is testable.
The "postcondition" is the expected behavior of an implementation of a specification's requirement when a test assertion derived from that requirement is evaluated under the context of the test assertion's precondition. The expected behavior may take the values "true|false|N/A", or may take a spectrum of values denoting the extent to which the implementation behaves as expected (according to the preconditions and any other influences present). It may be desirable, for simplicity, for a test assertion to have a binary outcome; however, some test assertions may support a spectrum of possible outcomes.
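Taken together, the three parts above can be sketched as a simple data structure. The following is a minimal, hypothetical illustration in Python; the class, field, and function names are our own assumptions and do not come from any cited specification:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestAssertion:
    """One test assertion, split into the three parts described above."""
    assertion_id: str                      # identifier, e.g. a spec section reference
    precondition: Callable[[dict], bool]   # context under which the requirement applies
    main_body: str                         # testable statement of the spec requirement
    expected: bool                         # postcondition: expected outcome when evaluated

def evaluate(assertion: TestAssertion, context: dict, observed: bool) -> str:
    """Return PASS/FAIL, or N/A when the precondition does not hold."""
    if not assertion.precondition(context):
        return "N/A"
    return "PASS" if observed == assertion.expected else "FAIL"

# Example: a WCAG-style assertion that only applies to flashing content.
flash_assertion = TestAssertion(
    assertion_id="wcag-flash-threshold",
    precondition=lambda ctx: ctx.get("has_flashing_content", False),
    main_body="Content does not violate the Red Flash Threshold.",
    expected=True,
)
print(evaluate(flash_assertion, {"has_flashing_content": False}, observed=True))  # → N/A
```

Note how the precondition yields a third outcome ("N/A") distinct from pass or fail, matching the observation above that an assertion may simply not apply in a given context.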
As a more specific instance of a test assertion broken into parts, the Web Services Interoperability Organization Basic Profile 1.1 Test Assertions Version 1.1 gives, for each test assertion: test assertion identifier, entry type, test type, enabled flag, message input, WSDL input, prerequisites, profile requirements (targets/), context, assertion description, failure message, failure detail description, and comments. There is a mapping (index) from each specific profile requirement to the specific test assertion corresponding to that requirement. From these various parts, can a "precondition", "main body", and "postcondition" be created?
As another more specific instance of a test assertion broken into parts, in the Hypertext Markup Language (HTML) 4.01 Test Suite - Assertions, each testable assertion has the following: assertion number, specification reference, assertion text prepended with one of (author), (must), (should), (may), (informative), or (deprecated), and a series of links to test cases which test that assertion. From these various parts, can a "precondition", "main body", and "postcondition" be created?
As still another illustration, in the SOAP Version 1.2 Test Assertions, each assertion has the following parts: assertion identifier (text string), location of the assertion (specification reference), text from the specification, comments (related to the degree to which the assertion can be tested), and links to tests pertaining to that assertion. NOTE: Some assertions have the comment that "the assertion will not be tested". From these various parts, can a "precondition", "main body", and "postcondition" be created?
As still another example of the parts of a test assertion, for the VoiceXML 2.0 (VXML) and SSML 1.0 (SSML) specifications, the information associated with a single test assertion was the following:
NOTE: From these various parts, can a "precondition", "main body", and "postcondition" be created?
As a final illustration of various test assertion "forms", according to Unisoft-About Test Assertions, assertions may take any of the following forms: bold assertion ("x is y"), cause/effect behavior ("when x occurs, then y results"), and conditional assertion ("if -precondition-: when x, then y").
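The three forms above can be read as progressively adding structure. As a rough illustration (a hypothetical Python sketch; the function names are ours, not Unisoft's):

```python
# Three forms of test assertion, per the Unisoft taxonomy described above.

# 1. Bold assertion: "x is y"
def bold_assertion(x, y):
    return x == y

# 2. Cause/effect behavior: "when x occurs, then y results"
def cause_effect(occurred_x, resulting_y):
    # Vacuously true when x did not occur.
    return (not occurred_x) or resulting_y

# 3. Conditional assertion: "if -precondition-: when x, then y"
def conditional(precondition, occurred_x, resulting_y):
    if not precondition:
        return None          # not applicable in this context
    return cause_effect(occurred_x, resulting_y)

print(bold_assertion("green", "green"))   # → True
print(cause_effect(True, False))          # → False (x occurred but y did not)
print(conditional(False, True, False))    # → None (precondition absent, N/A)
```

The conditional form is the only one that distinguishes "not applicable" from failure, which is why the "precondition" part discussed earlier matters for the other forms as well.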
First some different methods for creating test assertions are described, and then some different approaches to using test assertions in a testing process are mentioned.
Different methods may be used to build assertions. Assertions may be "implicit-not derived" or "derived". Within each of these categories, they may be automatically or manually generated.
For the first category ("implicit-not derived"), assertions may not need to be "derived" at all. That is, they may be text that is actually contained within the specification (identified manually, where readers of the specification mark the specification up, or identified by the specification author using embedded markup). Thus, test assertions may be delimited in the specification (that is, they are actually a part of the specification). As such, these test assertions are contained verbatim in the specification language, or the specification is written entirely as a series of test assertions.
Alternatively, assertions may be "derived" (changed?) from the specification manually (by a person who reads text, tables, grammars, etc. and "derives" the assertion by changing the specification text), or automatically (from a formal grammar or other processor). Assertions may also be "derived" using first-order predicate logic.
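The "implicit" approach above (marking assertions up directly in the specification source) can be sketched as a simple extraction pass. The `class="assert"` marker below is an invented convention for illustration only, and a real tool would use a proper HTML parser rather than a regular expression:

```python
import re

# A fragment of a hypothetical specification with embedded assertion markup.
spec_html = """
<p>A conforming processor <span class="assert">MUST reject documents
that are not well-formed</span>. Background discussion follows.</p>
<p><span class="assert">The meta element MUST occur before all other
elements contained within the root element</span>.</p>
"""

def extract_assertions(html: str) -> list:
    """Pull out the spans marked as assertions, normalizing whitespace."""
    pattern = re.compile(r'<span class="assert">(.*?)</span>', re.DOTALL)
    return [" ".join(m.split()) for m in pattern.findall(html)]

for a in extract_assertions(spec_html):
    print(a)
```

Because the extracted text is taken verbatim from the specification, this approach avoids the misinterpretation risk that manual "derivation" carries.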
Formal methods may be used in building test assertions (see the presentation on formal methods at the W3C Technical Plenary). The following WGs have used formal methods to some degree to support and ground their work: XML Query WG (XQuery 1.0, XPath 2.0), Web Services Choreography WG, and Web Services Description WG (WSDL 2.0). Using formal methods may force one to read the prose of a specification carefully, which is a good way to review a specification and find errors, as well as to support global consistency checking (see Section 3). Use of a formal specification versus a prose specification may illustrate the benefits of formal semantics applied to a specification and serve as an implementation guide to a specification. However, it may not be appropriate to formalize all aspects of a specification.
Such a "formal" approach, as an example, may resolve issues in dealing with ambiguities of parsing text representations of requirements in specifications (for example, English phrasing), versus precise mathematical formalisms or first-order predicate logic definitions, which may be more precise than prose. There may also be some internationalization issues, introduced by translations of specification requirements into other languages, which may involve some potential for misinterpretation of requirements if the derived test assertions are also translated or in different languages.
NOTE: From the above discussion, it is likely more beneficial to "identify" test assertions directly within a specification (by marking up the actual content of the specification) than to "derive" assertions from the specification (which may involve change from the literal content of a specification). In the latter case there is the danger that the intent of the specification will be misinterpreted, and requirements may be defined that are not present in the specification. If assertions can be automatically derived (perhaps from a formal grammar), then the problem of misinterpretation does not exist. Furthermore, the number of potential issues (see the guiding principles section) may be minimized.
Some examples of ways to create test assertions follow.
Specific techniques for creating test assertions (adapted from QA Framework: Specification Guidelines (QA Spec GL)) may include: (1) creating a template for new specification proposals that includes a section for adding test assertions, (2) identifying all requirements in a specification and trying to write corresponding test assertions, and (3) writing test assertions when adding functionality to a specification. Not being able to write a test assertion for some specification functionality suggests that there is a problem in the way that functionality is designed or explained.
For CSS1, as a possible approach to building a test assertion, CSS1 properties may be evaluated, and specified values for each evaluated CSS1 property may be expressed in a particular format (from the appropriate section of the CSS1 specification). After this, the CSS1 requirements for constructing declarations and rules (possibly in another section of the CSS1 specification) may be applied to the property information mentioned previously to actually create the test assertions.
For Web Content Accessibility Guidelines (WCAG) 2.0, a success criterion (assertion), which is designed to be measurable (testable) and "technology-neutral", is derived from the applicable WCAG 2.0 guideline. The consensus of the WG decides whether such a success criterion qualifies as a test assertion (is "testable").
The process of generating test assertions for XForms 1.0 was described by Steven Pemberton of the XForms WG. (NOTE: Steven Pemberton consented in the "Survey of Testing Practices" to have his information made public.) These test materials were built and tested during the W3C Candidate Recommendation phase. The process used was: at a face-to-face meeting,
Tests were typically associated with an assertion as well as a section of the specification.
The process of generating test assertions for the VoiceXML 2.0 and SSML 1.0 specifications was described by Dave Raggett of the Voice WG. (NOTE: Dave Raggett consented in the "Survey of Testing Practices" to have his information made public.)
The development Process (for both specifications) went as follows:
NOTE: It was deemed very important to allow different people to work on the same topic and to effectively track the progress of this time-consuming activity.
To support this activity, a Web form was created to be filled out; this form was linked to the specification and allowed one to:
In terms of test results availability, both implementation reports were Extensible Markup Language (XML) template documents which listed, for each test assertion, "PASS", "FAIL", or "NOT-IMPL", plus a comment to clarify troubles, errors in the tests, disputable issues, or any needed customization.
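A per-assertion report of that kind is straightforward to process mechanically. The following sketch invents its own element and attribute names for illustration; the actual VoiceXML/SSML report formats differ:

```python
import xml.etree.ElementTree as ET

# Hypothetical implementation report: one <assertion> element per test
# assertion, with a result of PASS, FAIL, or NOT-IMPL and an optional comment.
report_xml = """
<report spec="SSML1.0">
  <assertion id="290" result="PASS"/>
  <assertion id="291" result="NOT-IMPL" comment="feature not implemented"/>
  <assertion id="292" result="FAIL" comment="disputable rendering"/>
</report>
"""

def summarize(xml_text: str) -> dict:
    """Count results per category across all assertions in the report."""
    counts = {"PASS": 0, "FAIL": 0, "NOT-IMPL": 0}
    for node in ET.fromstring(xml_text).iter("assertion"):
        counts[node.get("result")] += 1
    return counts

print(summarize(report_xml))   # → {'PASS': 1, 'FAIL': 1, 'NOT-IMPL': 1}
```

Keeping the report machine-readable is what makes it possible to aggregate results across many implementations when preparing an implementation report.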
One challenge mentioned for this process was keeping track of changes to the specification during the development of the test assertions.
In addition to having different methods of creating test assertions, there are different methods of using test assertions in a testing process, depending on the actual testing process employed. Examples of some of these methods follow.
The "assert" functions defined in SchemeUnit are used to check for success or failure of the applicable test cases.
As another example, the BioAPI CTS Release Notes describe test assertions included in the release to support the functions listed, as well as the use of an "assertion processor" to handle testing. Assertions may be implemented differently, depending on the actual testing process employed.
As still another example, the WHQL Test Specification FAQ provides information about test assertions and test methodologies for system and device tests, and includes "test assertions" in the "test specification" portion of a diagram denoting the testing process.
As another example, the VoiceXML2.0 Implementation Report Document discusses the role of test assertions in generating an implementation report for VoiceXML.
A final example details the role of an "assertion processor" in an automated testing approach. In Understanding the WS-I Test Tools, a test assertion serves as input to an analyzer tool, which processes a set of test assertions to determine conformance.
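The "assertion processor" pattern can be sketched generically: a processor applies each assertion's check to an artifact and aggregates a conformance verdict. This is a hypothetical sketch of the pattern, not the WS-I analyzer's actual interface:

```python
# Hypothetical "assertion processor": applies a list of (id, check) pairs
# to an artifact and reports per-assertion and overall conformance.

def process(artifact: dict, assertions: list) -> dict:
    """Evaluate each assertion's check against the artifact."""
    results = {}
    for assertion_id, check in assertions:
        results[assertion_id] = "PASS" if check(artifact) else "FAIL"
    return results

def conforms(results: dict) -> bool:
    """The artifact conforms only if every assertion passes."""
    return all(r == "PASS" for r in results.values())

# Example artifact and two invented assertions about it.
message = {"envelope": True, "version": "1.2"}
assertions = [
    ("has-envelope", lambda m: m.get("envelope") is True),
    ("is-v12", lambda m: m.get("version") == "1.2"),
]
results = process(message, assertions)
print(results, conforms(results))
```

Because each assertion is evaluated independently, the report pinpoints which requirements failed rather than giving only an overall verdict.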
There are many examples of specific test assertions (using some or all of the creation methods and parts previously described). A few are mentioned following.
Example 1: From WCAG2.0, SC 1.4.1: Text or diagrams, and their background, have a luminosity contrast ratio of at least 5:1
HTML Technique for Meeting SC1.4.1 (includes test procedure for that technique): "H21: Not specifying background color, not specifying text color, and not using CSS that changes those defaults"
Example 2: From CSS1, there is a verbatim sentence "This (the color?) property describes the text color of the element.", and the following format:

Value:
Initial: UA specific
Applies to: all elements
Inherited: yes
Percentage values: N/A

All of this maps to a specific test of this statement, as qualified by the format, as follows:
CSS property definition --> CSS rule --> observed rendering of rule using user agent. There is a CSS "color" property, which applies to all elements and has values red | green.., etc. A CSS rule "p {color: green;}" is created (applying a green color to all paragraph elements in a "document"), which is the "precondition". If this rule is syntactically correct according to the CSS (passes the W3C CSS Validator) and HTML technologies, and is included as such in a rendered HTML document (which passes the W3C HTML Validator), then the affected paragraph should be green in the HTML document rendered by the user agent, resulting in the following test assertion: "This paragraph should be green."
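That chain can be written out explicitly in the three parts identified earlier. The dictionary keys below are our own labels, and actually verifying the postcondition would of course require a browser or rendering engine rather than this sketch:

```python
# The CSS "green paragraph" example expressed as precondition /
# main body / postcondition. Keys are illustrative labels only.
css_assertion = {
    "precondition": (
        "The rule 'p {color: green;}' is syntactically valid CSS "
        "(passes the W3C CSS Validator) and appears in a valid HTML document."
    ),
    "main_body": "The 'color' property describes the text color of the element.",
    "postcondition": (
        "Every affected paragraph in the rendered document is green, i.e. "
        "'This paragraph should be green.' holds for the user agent."
    ),
}

for part, text in css_assertion.items():
    print(f"{part}: {text}")
```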
Example 3: SOAP version 1.2 Part 1 Assertions
Assertion x1-conformance-part1
Location of the assertion:
SOAP 1.2 Part 1, Section 1.2
Text from the specification:
For an implementation to claim conformance with the SOAP Version 1.2 specification, it MUST correctly implement all mandatory ("MUST") requirements expressed in Part 1 of the SOAP Version 1.2 specification (this document) that pertain to the activity being performed. Note that an implementation is not mandated to implement all the mandatory requirements.
Comments:
This statement applies to all assertions and as such will not be tested separately.
Example 4: HTML 4.01 Test Suite
Assertion 9.2.1-1
Reference: Section 9.2.1
(must) EM: Indicates emphasis. Phrase elements add structural information to text fragments. Start tag and end tag are required.
Tests: 9_2_1-BF-01.html
Example 5: XML Test Suite
Section: Documents
Type: Well_Formed
Purpose: A well formed document must have one or more elements.
Level 1
Example 6:
From the SSML1.0 Implementation Report:
Assert ID: 290
Spec: 2.1
Required: Yes
Manual: No
Test Class: Abs_Rating
Test Level: Simple
Assertion: The meta element must occur before all other elements and text contained within the root speak element.
Example 7:
From the XForms1.0 Implementation Report: Low-level assertion tests (Chapter 8, "Form Controls"), links to: Chapter 8 Test Suite, an example of which is: "Common to All Form Controls" (specification reference)
8.1.1-1 Level A (MUST)
Text: "Only values with bound UI controls should be displayed".
The following are principles/goals to be followed when creating test assertions. After the first few principles (which are considered "fundamental"), the remaining principles are listed in no particular order.
A test assertion should be testable (as in QA Framework: Specification Guidelines definition of "testability"- "A proposition is testable if there is such a procedure that assesses the truth-value of a proposition with a high confidence level"). Consequently, there should be at least one way of measuring the truth of a test assertion with assurance. Whether the confidence level is "high" and is objectively measurable may depend on the content of the test assertion and its context.
Test assertions should be as "short" as possible, measured in number of characters; a "short" test assertion may be more likely to be a "simple" or "atomic" test assertion than a "long" one.
As an example, in the creation of the WCAG2.0 success criteria, even though the success criteria statements are designed to be "technology-neutral", it was ensured that these success criteria were "testable" before they were approved as success criteria.
As another measure of "coverage", creators of test assertions should consider how "inclusive" or "exclusive" to make their collection of test assertions. "Exclusivity" may imply a higher degree of rigor, "positivity", or "confidence"; it may preclude testing "edge statements", and may lead to a smaller number of test assertions. In contrast, "inclusivity" (a lower degree of rigor, "positivity", or "confidence") may involve some "edge testing" or imply some subjectivity in testing, and may result in more test assertions, but may be important to the WG involved in testing its specification.
In creating a set of test assertions for a specification, it is probably best to keep the format of all the assertions in the set the same (for consistency and clarity, and as an aid to understanding).