
draft minutes: 8 Aug 2005 Dublin F2F, pm

From: lynne rosenthal <lynne.rosenthal@nist.gov>
Date: Mon, 08 Aug 2005 11:49:24 -0400
Message-Id: <>
To: www-qa-wg@w3.org
Minutes for:
F2F Dublin, 8 August 2005, pm
Scribe: Lynne

Summary of New Action Items:
AI-20050808-1: Lofton: Create a TOC list of QA Handbook Good 
Practices.  August 12

1.  QA Handbook
Any ideas on how to promulgate this document?  What is the life of this 
document after the QAWG closes?
It would be useful to complete the templates – the charter and process 
templates.  The lack of comments on the QA Handbook suggests that there isn't 
much interest.  It is currently published as a Note.  Possible ways to promulgate it:
    * There are documents for chairs (especially for new chairs), add a 
link to QA Handbook
    * When groups are formed, point to it.  Also point to Test FAQ.
    * When we hear about a new test effort, point to it and to the Test FAQ.
The Handbook is very tied to W3C process and thus, not very useful outside W3C.
A detailed table of contents that lists the good practices needs to be 
added. Changes to the Handbook mean that we need to republish it.

2. Primer
What is needed is a graphic or executive summary to give the reader a 
preview of what the document is about, what it contains, and who would be 
interested in it.  A picture showing this would be helpful.

3. Test ML
The beginning of a collection of metadata for test suites, and related 
information, is available on the wiki.
Having a common format, schema, or language would facilitate the development 
of testing tools and consistency between tools.  This work could be 
continued under the IG.  This week we could begin to develop requirements 
or an outline for Test ML.  What is produced needs to be practical and 
useful – for example, a schema or template.  Even a list of metadata, with 
an explanation and a rationale for including each item in a test suite, 
would be helpful.

List of metadata  (some are taken from Dublin Core)
In addition to each metadata element and its description, there should be an 
indication of whether it is mandatory or optional. The metadata should be 
independent of its implementation, i.e., of whether the metadata is separate 
from the test or embedded in the test.
1. Identification: be able to uniquely refer to each test case. This is 
different from versioning.
2. Name: a human-readable name, allowing one to refer to a given test case.
3. Purpose: why the test case is needed.
4. Description: a human-readable description of what the test case does and how.
What is the difference between Purpose and Description? The description 
provides the ability to include additional information. Optional.
5. Status of test case: assuming there is a review process for test cases, 
this would indicate whether the test is approved, rejected, modified, 
etc.  There was disagreement as to whether this item is optional or 
mandatory.  It may depend on whether there is a development process and on 
the stage the test suite is at – status is important during development, 
but may not matter for a published test suite.
6. Versioning: difficult and tricky.  Many have struggled with this – it 
depends on the evolution of the specification as well as the test suite.
7. Link to Spec: this could be a link or pointer, and may not be human 
readable. It may be text in the spec, a URL, etc. It relates to the purpose 
and description, and to versioning.
8. Link to Issue:  This may be very appropriate for test-driven 
development.  The test case may illustrate or solve a problem or 
issue.  Perhaps this is backwards – the issue should link to the test case, 
since that is what drives the test. It would be redundant to have the issue 
point to the test and the test point to the issue.
9. Dependencies: This could indicate preconditions, running one test before 
another test.  In the Test FAQ we say to avoid sequencing.
10. Grouping: useful if test cases are grouped together in some way.  This 
can be a way to manage test cases.  Often keywords are the mechanism to 
group tests together.  Keywords would provide knowledge management about 
the test.
11. variability-driven filtering criteria: can filter via keywords.  Is 
this the same as Grouping?  Label tests as to which profiles they are 
appropriate for.
12. Input (pre/post-conditions?): additional information or data that is 
necessary for running the tests and getting results. Not the same as 
expected results.
13. Expected results: is this the same as a post-condition?
14. Creator or contributor: author, organization, etc., that created the test.
15. Dates: which date? There are many dates that could be associated with a 
test; we should create a list of what these are.  One or more dates could 
be associated with a test.
16. Type: is this the same as Grouping or keywords? Examples are 
positive vs. negative tests, static vs. dynamic, manual vs. automatic.
17. Source: where the test came from – e.g., derived from another test.
18. Language: human languages, associated with other metadata elements. 
This is metadata about metadata, or an attribute of another element. 
We probably don't want to include this.  We may want to include a caveat or 
explanation about metadata about metadata, since there are several such 
elements.
19. Coverage: the extent or scope of the content of the resource; includes a 
pointer to the specification.
20. Rights (rights management).
21. Priority:  not worth including.
22. References or see also: related information.
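To make the list above concrete, a Test ML record for a single test case might look something like the following. This is only a sketch – the element names, attributes, and file names are invented here for illustration and are not an agreed format (the spec link and expected result follow the fn:substring example from the XPath/XQuery Functions and Operators spec):

```xml
<!-- Sketch of a possible Test ML record; all element and attribute
     names here are illustrative only, not an agreed format -->
<testCase id="fn-substring-001" status="approved" date="2005-08-08">
  <name>substring, two-argument form</name>
  <purpose>Verify fn:substring when only a start position is given.</purpose>
  <description>Evaluates fn:substring("motor car", 6) and compares the
    result to the expected string.</description>
  <specLink>http://www.w3.org/TR/xpath-functions/#func-substring</specLink>
  <creator>Example Contributor</creator>
  <keywords>functions substring positive</keywords>
  <input>fn-substring-001.xq</input>
  <expectedResult> car</expectedResult>
</testCase>
```

A record like this could equally be kept in a separate catalog file or embedded in the test itself, which is the implementation-independence point noted above.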

Conducted a review of various test metadata: RDF, OWL, CSS, XML, DOM, XSLT, 
XQuery. It looks like the list of metadata includes what is already in these 
test suites. XQuery contains some good guidelines or practices for tests.
The introduction and scope of the test case metadata need to be very clear.  
It is important that it be clear whether an element is relevant to 
developing the test or to using the test.

Next steps: develop a general template.  Include with each element: 
description, rationale, syntax requirement (format, human readable), 
requirement (mandatory, optional), use case, example, relevancy 
(development vs test usage), dependency or relationship to another element, 
link to Test FAQ if applicable.

Metadata Element:
Name: Purpose
Description: the requirement or an explanation of the requirement to be tested
Required?  Yes
Rationale: for development: helps to plan and verify the level of coverage; 
for usage: execution, helps users determine relevance and understand result.
Syntax: hypertext (text+ links)
Relevance: development, usage (execution)
See also: Link to spec, Description (dup?)
Link to Test FAQ: Question 7
Example: “This test verifies that when conditions A and B apply, the 
processor does the right thing”
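The filled-in entry above could likewise be captured in markup, so that the template itself is machine-readable. Again a sketch only, with invented element and attribute names:

```xml
<!-- Sketch: the "Purpose" template entry above expressed in markup;
     element and attribute names are invented for illustration -->
<metadataElement name="Purpose" required="yes">
  <description>The requirement, or an explanation of the requirement,
    to be tested.</description>
  <rationale phase="development">Helps to plan and verify the level of
    coverage.</rationale>
  <rationale phase="usage">Helps users determine relevance and understand
    the result.</rationale>
  <syntax>hypertext (text plus links)</syntax>
  <seeAlso>Link to spec</seeAlso>
  <seeAlso>Description</seeAlso>
  <testFAQ question="7"/>
</metadataElement>
```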
Received on Monday, 8 August 2005 15:50:13 UTC

This archive was generated by hypermail 2.4.0 : Friday, 17 January 2020 22:43:40 UTC