Proposed Rewrite of "Guide to Test Assertions" - Tim Boland - May 1, 2006

  1. Introduction

    This document is a guide to defining and creating test assertions for specifications. Its purpose is to help you understand what test assertions are, why they are worth creating, and most importantly, how to create them.

    This document is addressed to specification writers, World Wide Web Consortium (W3C) Working Group (WG) chairs, software developers, W3C WG members, and others interested in testing and quality issues. There are both W3C and non-W3C references in this document, and so it is hoped that non-W3C members will find this document useful and applicable as well. This document is meant to appeal to the novice as well as the experienced creator of test assertions. After this introduction, sections are presented on: definitions of test assertions, the benefits/rationale of test assertions, methods for building/using test assertions, examples of test assertions, and finally some general guiding principles for test assertion development. After reading this document, you should have a basic understanding of test assertions and their importance, and be ready to create test assertions for your purposes.

    This is a document produced by the W3C Quality Assurance Interest Group (QAIG). Consequently, this document is intended to be a "companion" document to the following other W3C QA documents: QA Framework: Specification Guidelines (QA Spec GL), Test Development FAQ, Test Case Metadata Note, and the QA Wiki "Testable or not?" page. These other W3C QA documents were used as resources in preparing this document.

  2. Definitions

    There are several slightly varying definitions of "test assertions" and similar concepts, depending on the approach/point of view of the definer. These definitions (as well as similar concepts) are listed below for completeness. However, it may be more important to emphasize the "common" functionality implied/inherent in these varying definitions than to concentrate on the nuances of the differences themselves.

    For example, QA Framework: Specification Guidelines states: "A test assertion is a measurable or testable statement of behavior, action, or condition. It is contained within or derived from the specification's requirements and provides a normative foundation from which one or more test cases can be built."

    Patrick Curran, of the QAIG, states that "Test assertions are simple, unambiguous, declarative statements of required functionality. They are either contained within or derived from the specification".

    Another definition of "test assertion" is at the Unisoft Site, as in:

    "A test assertion (also known as a test description), details an individual unit of functionality or behavior derived from statements contained in the API specification to be tested."

    Still another definition of test assertion is at the Understanding the WS-I Test Tools Site, as in:

    "A test assertion is a testable expression of one or more requirements in the Basic Profile."

    The W3C QA Glossary defines "test assertion" as "a set of premises that are known to be true by definition in the specification."

    Finally, a definition of "assert" (from Webster's New Riverside University Dictionary, 1984) is "to state or express positively". Thus, by extension, a "test assertion" might be "a positive statement that can be tested"?

    A possible issue in defining "test assertion" may be that there are different terms used for similar concepts in multiple documents related to testing. For example, the term "conformance assertion" is used in the BioAPI CTS Release Notes; is this the same concept as a "test assertion" here?

    In the WHQL Test Specification FAQ, "test assertion" is mentioned as being included in a "test specification"; is "test specification" the same as "test assertion" as defined previously? In the BentoWeb Project, "test assertion" is part of the "test purpose"; is "test purpose" the same as "test assertion" as defined previously?

    Finally, the term "test assertion" is not used at all in IEEE 829: 1998 Standard for Software Test Documentation, which documents the testing of software and specifies eight stages in the documentation process (test plan, test design specification, test case specification, test procedure specification, test item transmittal report, test log, test incident report, and test summary report). The first two of these stages (which seem to deal with the "test assertion" concept?) are:

    "Test Plan: A detail of how the test will proceed, who will do the testing, what will be tested, in how much time the test will take place, and to what quality level the test will be performed", and

    "Test Design Specification: A detail of the test conditions and the expected outcome. This document also includes details of how a successful test will be recognized."

    Are "test plan" and "test design specification" the same concept as "test assertion" as defined previously?

  3. Rationale and Benefits of Test Assertions

    Goals of successful specification development may be: (1) widespread use/adoption, and (2) interoperability of implementations. The use of test assertions may advance these goals by: (1) benefiting a specification's quality, and (2) improving the quality of tests of conformance to that specification. These benefits are complementary because test assertions represent an important connection from the text of a specification to a test suite that will verify conformance to that specification. This is because test assertions are derived from the requirements of a specification, and they connect a specification to the tests pertaining to that specification.

    In terms of specification quality, test assertions can provide invaluable insights on the meaning of conformance to a specification. It should be possible to identify test assertions within a specification in order for a specification's quality to be assured. Otherwise, implementors may interpret specification requirements differently, leading to incompatible implementations. It may be difficult to verify that a specification is "testable" (see Section 4) without test assertions. Thus, the motivation for creation of test assertions is to promote widespread implementation and adoption of a specification's technology.

    To be "testable", specifications need to be verifiable, or correct. Thus, unambiguous semantics in a specification are desirable so that there is a precise idea of behavior, offering guidelines for implementation of a specification. For specifications, test assertions may help to clarify unambiguous semantics in illustrating how conformance requirements may be satisfied.

    Test assertions facilitate the development of consistent, complete specifications. Developing or extracting test assertions helps uncover inconsistencies, gaps, and non-testable statements in the specification. The activity of developing test assertions may identify and force clarification of ambiguities, contradictions, and even omissions in a specification, particularly early in a specification's development process. Thus the use of test assertions can provide early feedback to the editors regarding areas that need attention.

    In terms of improving the quality of tests/test suites, test assertions can be an important foundation for the testing process of a specification. Use of test assertions can promote the early development of tests; since each test assertion may be broken down into one or multiple tests, test assertions can be a starting point for developing conformance tests, or a conformance test suite, that is widely used and respected, and is of demonstrable "quality". In turn, conformance tests should address the test assertions from which they are derived. Test assertions may be valuable as input to test development efforts at the beginning of the test development process. In summary, without test assertions, test developers may not be able to develop a useful test suite.

    There is anecdotal evidence to support the previously mentioned benefits of test assertions in satisfying the previously stated goals. First, the QA Framework: Specification Guidelines (QA Spec GL) Good Practice 12 states "write test assertions"; the veracity of this good practice was confirmed in the review process for progression of that document to Recommendation status.

    Second, the use of test assertions in testing seems to be somewhat widespread (per the examples following), which anecdotally would seem to support the efficacy of their use in testing.

    Specifically, for XForms 1.0, a benefit mentioned in one response to a survey was that the process of generating the assertions identified overly vague language in places. It was also mentioned in another response to the same survey that identifying assertions before the progression of a specification to W3C Candidate Recommendation status was advantageous.

    In a separate response to the same survey for VoiceXML and SSML, a form-based process used to track test assertions was determined to be a key aspect for efficient management of the development of the implementation reports for these two technologies.

    Cascading Style Sheets (CSS) is widespread, and test assertions were used in the preparation of the CSS1 Test Suite. Similarly, web services, as in the Web Services Interoperability Organization Basic Profile 1.1 Test Assertions Version 1.1, are becoming more popular, and test assertions (see previous reference) were used in evaluating the quality of Web services specifications.

  4. Attributes/Characteristics of Test Assertions

    Test assertions can take different forms, but it may be more beneficial to notice the commonalities (or common functionalities implied in these different forms) than to concentrate on the differences in representations.

    In general, test assertions are statements, processable by humans or machines.

    Test assertions may be composed of (implicitly or explicitly) several different parts, or "functions", which may be described generally as: "precondition", "main body", and "postcondition".

    A "pre-condition" establishes the context under which a specification requirement applies. Since testability of a test assertion may be relative to the context in which the test assertion is used, a test assertion can be testable in one context, and not testable in another context. Some examples of "precondition" follow: an authoring tools test assertion relating to accessibility of preview modes would only apply if the tool contained a preview mode, or a Web Content Accessibility Guidelines (WCAG) 2.0 success criterion (test assertion) dealing with content not violating the Red Flash Threshold would only apply to flashing content A precondition for rule application of a CSS test assertion may be that the CSS syntax was correct as demonstrated by passing the W3C CSS Validator?

    In the "main body", many test assertions contain a reference to the specification requirement covered by the test assertion. The test assertion statement in the "main body" should be "testable", in the sense of the definition of "testability" in QA Framework: Specification Guidelines. Furthermore, a test assertion should support definition of (or be mappable to) (at least one) actual tests that can in fact test the outcome of (evaluate the truth of) that test assertion. Consequently, in terms of the Test Case Metadata, a test assertion may be entered as data for the "purpose" or "description" metadata test elements. NOTE: If the result of the test is unpredictable, it may be because the test assertion needs clarification. The end goal of the "main part" is a representation of the specification requirement as an assertion that is testable.

    The "postcondition" is the expected (correct) behavior of an implementation of a specification's requirement when a test assertion derived from that requirement is evaluated under the context of the test assertion's precondition. The expected behavior may take the values "true|false|N/A", or may take a spectrum of values denoting the extent to which the implementation behaves as expected (according to the preconditions and any other influences present). It may be desirable for a test assertion to have a binary outcome (for simplicity); however, some test assertions may support a spectrum of possible outcomes.
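
    To make these three parts concrete, the following is a minimal sketch (in Python; the class and field names are hypothetical, not drawn from any of the documents cited here) of a test assertion record whose evaluation yields the "true|false|N/A" outcome discussed above:

      from dataclasses import dataclass
      from typing import Callable

      @dataclass
      class TestAssertion:
          """Hypothetical three-part test assertion."""
          assertion_id: str
          main_body: str                           # testable statement of the requirement
          precondition: Callable[[object], bool]   # context under which the requirement applies
          postcondition: Callable[[object], bool]  # expected behavior when evaluated

          def evaluate(self, implementation: object) -> str:
              # "N/A" when the precondition does not hold in this context.
              if not self.precondition(implementation):
                  return "N/A"
              # Otherwise the outcome is the truth-value of the postcondition;
              # a non-binary assertion could return a spectrum of values instead.
              return "true" if self.postcondition(implementation) else "false"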

    As a more specific instance of a test assertion broken into parts, the Web Services Interoperability Organization Basic Profile 1.1 Test Assertions Version 1.1 gives for each test assertion: test assertion identifier, entry type, test type, enabled flag, message input, WSDL input, prerequisites, profile requirements (targets), context, assertion description, failure message, failure detail description, and comments. There is a mapping (index) from each specific profile requirement to the specific test assertion corresponding to that requirement. From these various parts, can a "precondition", "main body", and "postcondition" be created?

    As another more specific instance of a test assertion broken into parts, in the Hypertext Markup Language (HTML) 4.01 Test Suite - Assertions, each testable assertion has the following: assertion number, specification reference, assertion text (prepended with one of: (author), (must), (should), (may), (informative), and (deprecated)), and a series of links to test cases which test that assertion. From these various parts, can a "precondition", "main body", and "postcondition" be created?

    As still another illustration, in the SOAP Version 1.2 Test Assertions, each assertion has the following parts: assertion identifier (text string), location of the assertion (specification reference), text from the specification, comments (related to the degree to which the assertion can be tested), and links to tests pertaining to that assertion. NOTE: Some assertions have the comment that "the assertion will not be tested". From these various parts, can a "precondition", "main body", and "postcondition" be created?

    As still another example of the parts of a test assertion, for VoiceXML 2.0 (VXML) and SSML 1.0 (SSML), the information associated with a single test assertion was the following:

    1. Assertion ID (unique identifier)
    2. Specification: link to section of the specification
    3. Description of the test assertion (piece of text taken from the specification)
    4. Owner (for sending e-mail to the referenced individual(s))
    5. Status: see http://studio.tellme.com/vxml2_ir/whatis_status.html or similar reference
    6. Conformance Level: Optional / Required
    7. Manual Execution: Yes / No
    8. for VXML only - Absolute URI: yes/no
    9. for VXML only - lang Dependence: yes/no
    10. for SSML only - test_level: Absolute Rating / Single Pair / Multiple Pair
    11. for SSML only - test_class: simple / medium / complex
    12. Comment area
    13. Associated tests

    NOTE: From these various parts, can a "precondition", "main body", and "postcondition" be created?

    As a final illustration of various test assertion "forms", according to Unisoft-About Test Assertions, assertions may take any of the following forms: bold assertion ("x is y"), cause/effect behavior ("when x occurs, then y results"), and conditional assertion ("if -precondition-: when x, then y").
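
    These three forms can be given a schematic reading, as in the following sketch (in Python; the function and parameter names are invented for this illustration, not taken from Unisoft's materials):

      def bold_assertion(x, y) -> bool:
          # "x is y"
          return x == y

      def cause_effect(trigger, observe) -> bool:
          # "when x occurs, then y results"
          trigger()
          return observe()

      def conditional(precondition: bool, trigger, observe) -> bool:
          # "if -precondition-: when x, then y"
          # (treated here as not applicable, vacuously true, when the
          # precondition fails)
          if not precondition:
              return True
          trigger()
          return observe()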

  5. Methods for Building/Using Test Assertions

    First, some different methods for creating test assertions are described, and then some different approaches to using test assertions in a testing process are mentioned.

    Different methods may be used to build assertions. Assertions may be "implicit-not derived" or "derived". Within each of these categories, they may be automatically or manually generated.

    For the first category ("implicit-not derived"), assertions may not need to be "derived" at all. That is, they may be text that is actually contained within the specification (identified manually, where readers of the specification mark the specification up, or identified by the specification author using embedded markup). Thus, test assertions may be delimited in the specification (that is, they are actually a part of the specification). As such, these test assertions are contained verbatim in the specification language, or the specification is written entirely as a series of test assertions.
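
    As an illustration of the embedded-markup approach, suppose (hypothetically; the <assert> vocabulary below is invented for this sketch, not taken from any W3C document) that a specification's source delimits each assertion with an <assert> element. The assertions can then be extracted mechanically, verbatim:

      import xml.etree.ElementTree as ET

      # Hypothetical specification source in which the editor has delimited
      # each test assertion with an <assert> element.
      SPEC_FRAGMENT = """
      <section id="syntax">
        <p>General prose that is not itself testable.</p>
        <assert id="syntax-1">A conforming document MUST contain exactly one
        root element.</assert>
        <assert id="syntax-2">Attribute values MUST be quoted.</assert>
      </section>
      """

      root = ET.fromstring(SPEC_FRAGMENT)
      for element in root.iter("assert"):
          # The assertion text is contained verbatim in the specification,
          # so no "derivation" (and no risk of misinterpretation) occurs.
          print(element.get("id"), "-", " ".join(element.text.split()))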

    Alternatively, assertions may be "derived" (changed?) from the specification manually (by a person who reads text, tables, grammars, etc. and "derives" the assertion by changing the specification text), or automatically (from a formal grammar or other processor). Assertions may also be "derived" using first-order predicate logic.

    Formal methods may be used in building test assertions (see the presentation on formal methods at the W3C Technical Plenary). The following WGs have used formal methods to some degree to support/ground their work: XML Query WG (XQuery 1.0, XPath 2.0), Web Services Choreography WG, and Web Services Description WG (WSDL 2.0). Using formal methods may force one to read the prose of a specification carefully, which is a good way to review a specification and find errors, as well as to maintain global consistency (see Section 3). Use of a formal specification versus a prose specification may illustrate the benefits of formal semantics applied to a specification and serve as an implementation guide to a specification. However, it may not be appropriate to formalize all aspects of a specification.

    Such a "formal" approach, as an example, may resolve issues of dealing with ambiguities of parsing text representations of requirements in specifications (for example, English phrasing), vs. precise mathematical formalisms or first-order predicate logic definitions, which may be more precise than prose. There may also be some internationalization issues, introduced by translations of specification requirements into other languages, which may involve some potential for misinterpretation of requirements if the derived test assertions are also translated or in different languages.

    NOTE: From the above discussion, it is likely to be more beneficial to "identify" test assertions directly within a specification (by marking up the actual content of the specification) than to "derive" assertions from the specification (which may involve change from the literal content of a specification). In the latter case there is the danger that the intent of the specification will be misinterpreted, and requirements may be defined that are not present in the specification. If assertions can be automatically derived (perhaps from a formal grammar), then the problem of misinterpretation does not exist. Furthermore, the number of potential issues (see the guiding principles section) may be minimized.

    Some examples of ways to create test assertions follow.

    Specific techniques for creating test assertions (adapted from QA Framework: Specification Guidelines (QA Spec GL)) may include: (1) creating a template for new specification proposals that includes a section for adding test assertions, (2) identifying all requirements in a specification and trying to write corresponding test assertions, and (3) writing test assertions when adding functionality to a specification. Not being able to write a test assertion for some specification functionality suggests that there is a problem in the way that functionality is designed or explained.

    For CSS1, as a possible approach to building a test assertion, CSS1 properties may be evaluated, and the specified values for each evaluated CSS1 property may be expressed in a particular format (from the appropriate section of the CSS1 specification). After this, the CSS1 requirements for constructing declarations and rules (possibly in another section of the CSS1 specification) may be applied to the property information mentioned previously to actually create the test assertions.
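
    A sketch of that approach follows (the property/value table is abridged in the style of CSS1 definitions, and the sentence template is invented for this illustration, not taken from the CSS1 Test Suite):

      # Abridged, CSS1-style property definitions: property -> sample values.
      CSS1_PROPERTIES = {
          "color": ["red", "green", "blue"],
          "text-align": ["left", "right", "center", "justify"],
      }

      def build_assertions(properties: dict[str, list[str]]) -> list[str]:
          """Combine property/value pairs with the rule-construction
          requirements to yield one test assertion per pair."""
          assertions = []
          for prop, values in properties.items():
              for value in values:
                  rule = f"p {{ {prop}: {value}; }}"
                  assertions.append(
                      f"Given the syntactically correct rule '{rule}', the "
                      f"user agent renders affected elements with {prop} "
                      f"equal to {value}."
                  )
          return assertions

      for assertion in build_assertions(CSS1_PROPERTIES):
          print(assertion)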

    For Web Content Accessibility Guidelines (WCAG) 2.0, a success criterion (assertion), which is designed to be measurable (testable) and "technology-neutral", is derived from the applicable WCAG2.0 guideline. The consensus of the WG decides whether such a success criterion qualifies as a test assertion (is "testable").

    The process of generating test assertions for XForms1.0 was described by Steven Pemberton of the XForms WG. (NOTE: Steven Pemberton consented in the "Survey of Testing Practices" to have his information made public.) These test materials were built and tested during the W3C Candidate Recommendation phase. The process used was: at a face-to-face meeting,

    1. the WG was split into "buddy" groups of two people
    2. each group identified assertions in one section of the specification
    3. the groups then created a test case to check each assertion, or identified a test case that already tested the assertion
    4. the groups then created a number of fairly large complete examples to test interaction of features (there was no methodology here: the groups just created some large real-world examples)

    Tests were typically associated with both an assertion and a section of the specification.

    The process of generating test assertions for the VoiceXML 2.0 and SSML 1.0 specifications was described by Dave Raggett of the Voice WG. (NOTE: Dave Raggett consented in the "Survey of Testing Practices" to have his information made public.)

    The development process (for both specifications) went as follows:

    1. The WG worked first to split the specification into pieces and assign them to different people in subgroups (NOTE: the subgroups were rather small (2-3 people), and their members were geographically distant and in different time zones).
    2. The subgroups wrote down test assertions (very close to excerpts of the current specification that were testable) and reviewed them to see coverage and implementability.
    3. Then some tests were developed for each test assertion; to simplify the work there was a restriction of, mainly, one test per test assertion.
    4. Finally the tests were reviewed by people different from the implementers.

    NOTE: It was deemed very important to allow different people to work on the same topic and to effectively track the progress of this time-consuming activity.

    To support this activity, a Web form was created to be filled out; this form was linked to the specification and allowed one to do the following (a sketch of such a tracking record appears after the list):

    1. insert a new test assertion related to a section of the specification
    2. add technical information on a test assertion,
    3. provide a description of a test assertion
    4. state whether a test assertion was required or optional
    5. state whether a test assertion was automatable or not
    6. provide input related to an exchange area for saving textual comments on a test assertion
    7. update test assertion status to see the progress of a test assertion as: New, Study, Accepted, Authoring, Completed, Incomplete, Reviewed, and Rejected
    8. develop and hopefully test the single test associated with a test assertion
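
    A minimal sketch of the kind of record such a form might have maintained follows (the field and status names come from the list above; the structure itself is hypothetical):

      from dataclasses import dataclass, field

      # Status values named in the list above.
      STATUSES = ("New", "Study", "Accepted", "Authoring",
                  "Completed", "Incomplete", "Reviewed", "Rejected")

      @dataclass
      class TrackedAssertion:
          assertion_id: int
          spec_section: str             # link to a section of the specification
          description: str
          required: bool                # required vs. optional
          automatable: bool
          status: str = "New"
          comments: list[str] = field(default_factory=list)

          def update_status(self, new_status: str) -> None:
              # Track the progress of this assertion through the lifecycle.
              if new_status not in STATUSES:
                  raise ValueError(f"unknown status: {new_status}")
              self.status = new_status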

    In terms of test results availability, both implementation reports of the results were Extensible Markup Language (XML) template documents which listed, for each test assertion, "PASS", "FAIL", or "NOT-IMPL", and a comment to clarify troubles, errors in the tests, disputable issues, or any needed customization.
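
    As a sketch of what one entry in such a report might look like (the XML element and attribute names here are invented for illustration; the actual Voice WG templates are not reproduced):

      import xml.etree.ElementTree as ET

      # One entry per test assertion: result is "PASS", "FAIL", or "NOT-IMPL".
      results = [
          ("290", "PASS", ""),
          ("291", "NOT-IMPL", "feature not implemented in this build"),
      ]

      report = ET.Element("implementation-report")
      for assertion_id, result, comment in results:
          entry = ET.SubElement(report, "assertion-result",
                                {"id": assertion_id, "result": result})
          if comment:
              ET.SubElement(entry, "comment").text = comment

      print(ET.tostring(report, encoding="unicode"))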

    One challenge mentioned for this process was keeping the development of the test assertions synchronized with changes to the specification.

    In addition to having different methods of creating test assertions, there are different methods of using test assertions in a testing process, depending on the actual testing process employed. Examples of some of these methods follow.

    The "assert" functions defined in SchemeUnit are used to check for success or failure of the applicable test cases.

    As another example, the BioAPI CTS Release Notes describe test assertions included in the release to support the functions listed, as well as the use of an "assertion processor" to handle testing. Assertions may be implemented differently, depending on the actual testing process employed.

    As still another example, the WHQL Test Specification FAQ provides information about test assertions and test methodologies for system and device tests, and includes "test assertions" in the "test specification" portion of a testing diagram denoting the testing process.

    As another example, the VoiceXML2.0 Implementation Report Document discusses the role of test assertions in generating an implementation report for VoiceXML.

    A final example details the role of an "assertion processor" in an automated testing approach. In Understanding the WS-I Test Tools, a test assertion serves as input to an analyzer tool, which processes a set of test assertions to determine conformance.

  6. Examples of Specific Test Assertions

    There are many examples of specific test assertions (using some or all of the creation methods/parts previously described). A few are mentioned following.

    Example 1: From WCAG2.0, SC 1.4.1: Text or diagrams, and their background, have a luminosity contrast ratio of at least 5:1

    HTML Technique for Meeting SC1.4.1 (includes test procedure for that technique): "H21: Not specifying background color, not specifying text color, and not using CSS that changes those defaults"

    Example 2: From CSS1, there is the verbatim sentence "This property describes the text color of the element." (this refers to the "color" property), and the following format:

    Value: <color>
    Initial: UA specific
    Applies to: all elements
    Inherited: yes
    Percentage values: N/A
    
    All of this maps to a specific test of this statement, as qualified by format, as follows:

    CSS property definition --> CSS rule --> observed rendering of the rule using a user agent. There is a CSS "color" property, which applies to all elements and has values red | green, etc. A CSS rule "p {color: green;}" is created, applying a green color to all paragraph elements in a "document" (this is the "precondition"). If this rule is syntactically correct according to CSS (passes the W3C CSS Validator) and HTML technologies, and is included as such in a rendered HTML document (which passes the W3C HTML Validator), then the affected paragraph should be green in the HTML document rendered by the user agent, resulting in the following test assertion: "This paragraph should be green."
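
    Spelled out as a file, a minimal test document for this assertion might look like the following sketch (hypothetical; the actual CSS1 Test Suite files differ in structure and metadata):

      # A minimal, self-describing test document for the assertion
      # "This paragraph should be green." (sketch only)
      TEST_DOCUMENT = """<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
          "http://www.w3.org/TR/html4/strict.dtd">
      <html>
        <head>
          <title>CSS1 test: color</title>
          <style type="text/css">p { color: green; }</style>
        </head>
        <body>
          <p>This paragraph should be green.</p>
        </body>
      </html>
      """

      with open("color-green.html", "w") as out:
          out.write(TEST_DOCUMENT)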

    Example 3: SOAP version 1.2 Part 1 Assertions

    Assertion x1-conformance-part1

    Location of the assertion:

    SOAP 1.2 Part 1, Section 1.2

    Text from the specification:

    For an implementation to claim conformance with the SOAP Version 1.2 specification, it MUST correctly implement all mandatory ("MUST") requirements expressed in Part 1 of the SOAP Version 1.2 specification (this document) that pertain to the activity being performed. Note that an implementation is not mandated to implement all the mandatory requirements.

    Comments:

    This statement applies to all assertions and as such will not be tested separately.

    Example 4: HTML 4.01 Test Suite

    Assertion 9.2.1-1

    Reference: Section 9.2.1

    (must) EM: Indicates emphasis. Phrase elements add structural information to text fragments. Start tag and end tag are required.

    Tests: 9_2_1-BF-01.html

    Example 5: XML Test Suite

    Section: Documents

    Type: Well_Formed

    Purpose: A well formed document must have one or more elements.

    Level 1

    Example 6:

    From the SSML1.0 Implementation Report:

    Assert ID: 290

    Spec: 2.1

    Required: Yes

    Manual: No

    Test Class: Abs_Rating

    Test Level: Simple

    Assertion: The meta element must occur before all other elements and text contained within the root speak element.

    Example 7:

    From the XForms1.0 Implementation Report: Low-level assertion tests (Chapter 8, "Form Controls"), links to: Chapter 8 Test Suite, an example of which is: "Common to All Form Controls" (specification reference)

    8.1.1-1 Level A (MUST)

    Text: "Only values with bound UI controls should be displayed".

  7. General Guiding Principles for Creating Test Assertions

    The following are principles/goals to be followed when creating test assertions. After the first few principles (which are considered "fundamental"), the remaining principles are listed in no particular order.

    1. It is important not to change the semantics of the requirements of the specification in deriving the test assertions from the specification. If one is deriving test assertions from the specification rather than simply identifying statements (test assertions) from the specification, then the primary responsibility is to ensure that the test assertions derived are "faithful" to the specification. The best way to do this is to have the specification authors review the assertions. NOTE: Perhaps an additional check on this principle is to create some tests using the assertions and determine as objectively as possible that these tests actually test the specification requirements.

      A test assertion should be testable (as in QA Framework: Specification Guidelines definition of "testability"- "A proposition is testable if there is such a procedure that assesses the truth-value of a proposition with a high confidence level"). Consequently, there should be at least one way of measuring the truth of a test assertion with assurance. Whether the confidence level is "high" and is objectively measurable may depend on the content of the test assertion and its context.

    2. Assertions should be structured so that the results of their evaluation are as self-evident as possible.

    3. Assertions should be as precise (specific) and unambiguous as possible.

    4. A test assertion should make "atomicity" a goal, where the definition of "atomicity" in the context of a specification is up to each specification's WG. A possible way to evaluate this goal is to attempt to "subdivide" the assertion - is the result a complete statement? It is good to try to break specification requirements into smaller, testable pieces if possible. Each test assertion should state one separate (but complete) "requirement" of a specification. For example, in CSS, a section could have many property definitions/values, and thus many "atomic" test assertions would be developed, each corresponding to a particular property attribute/value.

      Test assertions should be as "short" as possible. A corresponding goal is to prefer a "short" test assertion over a "long" one, measured in number of characters; a "short" test assertion may be more likely to be a "simple" or "atomic" test assertion than a "long" one.
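
      As a worked illustration of subdividing (the compound sentence is adapted from the HTML 4.01 example in Section 6; the split into atomic assertions is this guide's own sketch):

        # A compound statement covering several requirements at once...
        compound = ("EM: Indicates emphasis. "
                    "Start tag and end tag are required.")

        # ...subdivided into "atomic" assertions, each a complete,
        # independently testable statement of one requirement.
        atomic = [
            "The EM element indicates emphasis.",
            "The EM start tag is required.",
            "The EM end tag is required.",
        ]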

    5. Test assertions should be independent of one another. They should represent different, nonoverlapping specification requirements.

    6. Each test assertion should be kept as simple as possible. Simplicity helps to minimize unintended effects from the format of the test assertion itself. A possible metric of simplicity might be the ability to use a simple, basic text editor to generate/manage test assertions. Simple test assertions may be easier to understand, and thus to test against (because side-effects are not present). Also, identifying the point of failure to satisfy a given requirement of a specification may be easier with simpler ("shorter"?) assertions than with more complex ("longer"?) assertions.

    7. Test assertions should map to a small number (perhaps one) of corresponding tests. If there are an excessively large number of tests that can be derived from a test assertion, then perhaps the test assertion is too complex and needs to be subdivided.

    8. Groupings of test assertions should be derived from and be aligned with the specification itself, according to the nature of the specification, since test assertions come directly from the specification. Thus, it is a goal that test assertions should follow the relationships expressed in the specification. Each specification has its own logical ordering to be followed in testing. The requirements are organized and related differently within each specification. For example, in the HTML 4.01 Test Suite-Assertions, the assertions are grouped by section of the HTML specification. As another example, WCAG2.0 has success criteria (testable assertions) grouped into four separate accessibility principles, which may be accessed in linear order.

    9. Traceability of test assertions to requirements in the specification should be documented/ensured. NOTE: There can be differences introduced between requirements as expressed in the test assertions, and those requirements in the specification itself, when test assertions are separately derived from the specification requirements; thus perhaps it may be helpful to document exactly how the test assertions are derived from the specification requirements, so that assurance can be achieved that requirements are unchanged between specification and corresponding test assertion(s).

    10. It should be a goal in a test assertion to test directly only the technology requirements from the applicable specification, and not ancillary/supporting requirements of other technologies from other specifications. NOTE: In some specifications, it is possible to make the assertions "technology-neutral", because these specifications are not "tied to" a particular technology per se. For example, the success criteria of WCAG2.0 are designed to be "technology neutral". However, other specifications necessarily deal with technologies, and so the corresponding assertions in that case cannot be "technology-neutral"; an example of this is CSS.

    11. Do not state explicitly in the test assertion how to test it. The language of test assertions should be independent of whether the testing is manual or automated. A test assertion should not prescribe exactly and in detail how to test a specification requirement. However, it is a goal in the creation of a test assertion to make sure that the test assertion can be tested (is testable), such that there is at least one way to test, or to (as objectively as possible) measure, that the test assertion is satisfied.

      As an example, in the creation of the WCAG2.0 success criteria, even though the success criteria statements are designed to be "technology-neutral", it was ensured that these success criteria were "testable" before they were approved as success criteria.

    12. It is important not to rely upon formatting or context to convey intention in a test assertion (in other words, it is important to separate content from presentation in a test assertion). However, to the extent that context influences a requirement in a specification, that same influence should be preserved in the derived test assertion so that the requirement does not change from specification to assertion.

    13. There should be a sufficient number or "coverage metric" of test assertions to adequately demonstrate conformance to the specification, per the requirements of the specification. A possible measure of this is the "coverage" of the tests (derived from the test assertions) in relation to the specification's requirements. A more complex specification may potentially contain thousands of test assertions. NOTE: In the Test Development FAQ, test assertions are listed as one way to "partition" a specification to determine coverage goals. As an example of solving a coverage issue for a particular technology, for XForms, all MUST assertions must be tested, and others may be tested.
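
      One simple form of such a coverage metric is sketched below (hypothetical; a WG might instead weight assertions, for example by MUST/SHOULD/MAY level):

        def assertion_coverage(assertion_ids: list[str],
                               tests_by_assertion: dict[str, list[str]]) -> float:
            """Fraction of test assertions having at least one associated test."""
            if not assertion_ids:
                return 0.0
            covered = sum(1 for a in assertion_ids if tests_by_assertion.get(a))
            return covered / len(assertion_ids)

        # Example: two of three assertions have tests, so coverage is ~0.67.
        print(assertion_coverage(
            ["9.2.1-1", "9.2.1-2", "9.2.1-3"],
            {"9.2.1-1": ["9_2_1-BF-01.html"], "9.2.1-2": ["9_2_1-BF-02.html"]},
        ))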

    14. Test assertions should be as "low-level" as possible for the specification/technology being tested. It is important to try to avoid "high-level" test assertions that must be broken down into multiple "lower-level" test assertions before they can be tested. As a side note, a degree of specificity may be reflected in the "level" of a test assertion. "Technology-neutral" assertions (like WCAG2.0 success criteria) may be "higher-level" than the technology-specific WCAG2.0 techniques used to test those success criteria; WCAG2.0 test procedures were actually created along with the test assertions themselves. Similarly, there could be a higher-level "test assertion" for a stated requirement in a specification, and a "lower-level" test assertion (possibly including specific language bindings) for testing software which claims to implement that specification's requirement. As a final example, low-level assertion tests for XForms are discussed in the XForms Implementation Report (in contrast with high-level feature tests).

      As another measure of "coverage", creators of test assertions should consider how "inclusive" or "exclusive" to make their collection of test assertions. "Exclusivity" may imply a higher degree of rigor, "positivity", or "confidence", may preclude testing "edge statements", and may lead to a smaller number of test assertions. In contrast, "inclusivity" (a lower degree of rigor, "positivity", or "confidence") may involve some "edge testing" or imply some subjectivity in testing, and may result in more test assertions, but may be important to the WG involved in testing their specification.

    15. In designing the format of test assertions, there should be some consideration of the ways test assertions may be used in the testing process. For example, will test assertions be automatically processed by a processor? If so, what format does the processor expect?

      In creating a set of test assertions for a specification, it is probably best to keep the format of all the assertions in the set the same (for consistency and clarity, and as an aid to understanding).

    16. In creating test assertions for one or more specifications, consideration should be given as to how assertions may be identified (for example, by assertion number), as well as in general how assertions will be described and managed (for example, by using metadata?).

    17. It should be a goal that testers work with the specification authors during the development phase of the specification, helping to tighten up and clarify the language of the specification so that assertions can be unambiguously identified while the specification is being created (as opposed to after the specification has been finished, when this process may be more difficult).