
draft Minutes QA F2F 23 October 2003, morning

From: <lsr@nist.gov>
Date: Thu, 23 Oct 2003 14:12:39 -0400
Message-ID: <1066932759.3f981a179ad5e@imp.nist.gov>
To: www-qa-wg@w3.org
Cc: ot@w3.org

Minutes: 23 Oct 2003

TestGL Progression:
Editorial resources = Patrick. Dom to help with formatting, Vanitha to help 
with the issues list.
Working Draft to the WG by end of November, to include concepts and other front 
matter. Need a published WD before Last Call.
Need a schedule.
As WGs are engaged to capture their experiences, we need to develop a 
template/questionnaire/outline of points to capture during the discussion.
ACTION: Vanitha, by 3rd week of November.

TestGL Guideline Discussion

CP 4.1 Define the test execution process [Priority 1]
Repeatable and Reproducible.
Suggest a new CP requiring test materials to state whether they are repeatable 
and reproducible. Need to supply definitions. ACTION: Lynne to supply 
definitions by Nov 1.
Discussion noted that these terms are goals that developers should strive to 
achieve. Add a CP: document the areas that are not repeatable and/or 
reproducible, and why.
If specific to a single test, this should be documented in the metadata for 
that test.

Rationale discussion: Does the order in which to run the test materials go 
here? It should be commented on here, since this section talks about how to 
run the tests. Make sure that every test that should be run is run, and that 
those that shouldn't be run are not.
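The "run exactly the tests that apply" point above can be sketched as a metadata-driven selection step. A minimal sketch, assuming each test case carries a `requires` set of feature names (an illustrative shape, not part of TestGL):

```python
def select_tests(cases, supported_features):
    """Keep every test whose required features the implementation supports;
    everything else is excluded from the run (hypothetical metadata shape)."""
    return [c for c in cases if c["requires"] <= supported_features]

cases = [
    {"name": "core-01", "requires": set()},
    {"name": "opt-01", "requires": {"optional-module"}},
]
# An implementation with no optional modules runs only the core test.
selected = select_tests(cases, supported_features=set())
```

Recording the selection criteria in per-test metadata, as the minutes suggest, is what makes this filtering step mechanical rather than judgment-based.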

4.2 Automate the test execution process
Requirements are that the process (1) MUST be automated and (2) MUST be 
platform independent. It is not always necessary, or possible, to be 
cross-platform. Take platform independence out of the requirement and put it 
into the discussion as a goal.

4.3 Integrate results reporting into the automated test execution process
Results reporting is the aggregation of the results. The more you expect the 
harness to do, the more issues you have with implementers' ability to use the 
harness, since they may want to use their own for their own reasons. So, 
require that there be a test harness, but allow people to substitute their own 
reporting mechanism or harness. The instructions for using the test suite 
(with the test harness) should indicate that the harness is not required for 
making a conformance claim. This is an important issue and should be captured, 
but it is out of scope for this CP. Put it in the generic discussion for 
Guideline 4, keeping it general, since it has broader applicability.
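The "require a harness, but allow substitution of the reporting mechanism" idea amounts to a pluggable interface between the two. A minimal sketch, assuming a reporter is anything with a `report(name, outcome)` method (the names here are illustrative, not from TestGL):

```python
class ResultCollector:
    """A substitutable reporting mechanism (hypothetical interface): the
    harness only needs something exposing report(name, outcome)."""
    def __init__(self):
        self.results = {}

    def report(self, name, outcome):
        self.results[name] = outcome


class Harness:
    """Minimal required harness: runs test callables and forwards each
    outcome to whatever reporter was plugged in."""
    def __init__(self, reporter):
        self.reporter = reporter

    def run(self, tests):
        for name, fn in tests.items():
            try:
                fn()
                self.reporter.report(name, "pass")
            except AssertionError:
                self.reporter.report(name, "fail")


def failing():
    assert False

collector = ResultCollector()
Harness(collector).run({"t-pass": lambda: None, "t-fail": failing})
```

An implementer who wants their own aggregation simply supplies a different reporter object; the harness itself stays required and unchanged.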

G5: Documenting and Packaging
5.1 Review the test materials
Move the first sentence of the ConfReq to the discussion. The object of this 
CP is the test material management system. Both modules (objects) are required 
for conformance. The fact that the materials have been in use for several 
years counts as a valid review. Review all test materials.
Add 'all' in all applicable places; need to review the use of 'all' in the 
entire document.

5.2 Document the test materials

5.3 Package the test materials into a test suite
A Test Suite is all the pieces of material needed, wrapped up together. As 
written, this is not testable. Make a minimal list of what MUST be provided, 
including: user documentation, IPR, the test harness if supplied, and 
reference output if defined. Need to make sure that 'test suite' is 
understood: a test suite is the package (sum) of all the components needed to 
test an implementation; test materials are the components that make up the 
test suite.
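The minimal MUST list above is the kind of thing a packaging step can check mechanically. A sketch, assuming component names of our own invention (the conditional items, harness and reference output, are only required when they exist at all):

```python
# Unconditionally required components per the CP; names are illustrative.
REQUIRED = ("user_documentation", "ipr_statement")
# Required only if supplied/defined at all: "test_harness", "reference_output".

def missing_components(manifest):
    """Return the unconditionally required components absent from a
    test-suite package manifest (a plain dict here)."""
    return [c for c in REQUIRED if c not in manifest]

suite = {"user_documentation": "README", "tests": ["t1", "t2"]}
gaps = missing_components(suite)
```

Running such a check at packaging time gives the "testable" conformance requirement the minutes ask for.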

5.4 Test the test suite
The management system is the object. David to help define the two objects of 
TestGL. In the discussion, include the frequency of applying the test plan and 

5.5 Solicit feedback on the test materials
Define how to give feedback, and ensure that the feedback is used.

Guideline 6 Define the process for reporting test results
6.1 Tests should report their status in a consistent manner
Need to add definitions for the terms. Reword. Remove Cannot Tell. In the 
discussion, reference that the terms came from EARL. Can you map these to 
other states, or must you use these 'states'? If a state applies, then it MUST 
be used. These are states; we have definitions of the states, and the 
definitions are normative. We are not providing labels for the states: if a 
state applies, use it. Recommend that, if in English, these labels be used. 
Change 'status' to 'outcome'.
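The normative-definitions-versus-labels distinction can be made concrete. A minimal sketch, assuming four states survive after Cannot Tell is removed (the exact names and the EARL-derived set are the WG's to fix; the local-label mapping is purely illustrative):

```python
from enum import Enum

class Outcome(Enum):
    """Hypothetical outcome states: the definitions are normative,
    the English labels are only recommended."""
    PASS = "pass"
    FAIL = "fail"
    NOT_APPLICABLE = "notApplicable"
    NOT_TESTED = "notTested"

# A suite may keep its own labels as long as each one maps onto a
# defined state; this mapping is an invented example.
LOCAL_LABELS = {"ok": Outcome.PASS, "broken": Outcome.FAIL}

def normalize(label):
    """Map a suite-local label onto the defined outcome state."""
    return LOCAL_LABELS[label]
```

This captures the resolution above: you may map to the states rather than use the labels, but if a state applies you must report it.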

6.2 Tests should report diagnostic information
Must provide diagnostic information; the remainder of the sentence is rationale.

6.3 Define and document the process for reporting test results
As is, this is already in OpsGL.
Rewrite as: define an interface to allow publishing of results.

6.4 Allow test results to be filtered
Have a results management system. 
6.5 Automate the results reporting system
Automate the system.

Test Case Description Language

Goal is to get reaction to the proposal.  
DM gave an overview and introduction to the document. He deliberately marked 
it as v1.0, since there will be further integration of it with other projects 
as there is more experience with, and reaction to, it. Earlier versions have 
been seen in the OASIS TC, which has been playing around with it in XQuery. 
The document includes guidance to the WG, providing a meta-specification. The 
intent is to nail down what will go along with the test suite as its 
catalogue. Mary Brady's comments have no direct impact; her concerns can be 
accommodated. This is more about the package that comes out of the WG. Have a 
set of modules, and then the WG could decide which modules (e.g., the 
dependency module) to include. It is not intended to be a controlling filter 
on test cases. It is designed to support automation, but not to presume 
automation. It presumes that reference output is available, but not the format 
of that output. The WG would define the set of scenarios, where a scenario is 
a complete description of a single case and its 
Alex's thoughts: This mixes two distinct goals: (1) to document and assemble 
information about test cases and related test information, and (2), more 
vaguely, to help the WG do its work. For example, it is confusing why the 
description of a test case relates to download success. Downloading is part of 
test lab operations, which need to make sure they have all the necessary 
components to execute the tests properly. This does not dictate the tools or 
environment needed to execute the tests. The goal should be to document test 
cases, not how to execute them. To succeed, TCDL would need multiple tools 
that use it; those tools shouldn't be in scope.

Patrick's thoughts: Is this rich enough to provide all the metadata needed to 
allow a consuming program to select an appropriate set of tests and run them? 
Such a run involves: (1) initial set-up, (2) execution of the subject to 
produce its results, (3) comparison, and (4) clean-up, archiving, 
post-operation maintenance, etc. First impression is that it isn't 
comprehensive enough.
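The four phases listed above can be sketched as a runner skeleton, to show what a consuming program would need TCDL metadata to drive. The `TestCase` shape is an assumption for illustration, not anything TCDL defines:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TestCase:
    """Illustrative case shape covering the four phases named above."""
    setup: Callable[[], None]
    execute: Callable[[], Any]
    expected: Any
    cleanup: Callable[[], None]

def run_case(case):
    case.setup()                                  # (1) initial set-up
    try:
        actual = case.execute()                   # (2) execute the subject
        outcome = "pass" if actual == case.expected else "fail"  # (3) compare
    finally:
        case.cleanup()                            # (4) clean-up / archiving hook
    return outcome

log = []
case = TestCase(setup=lambda: log.append("setup"),
                execute=lambda: 42,
                expected=42,
                cleanup=lambda: log.append("cleanup"))
result = run_case(case)
```

Phases (1), (2), and (4) are exactly the parts Patrick suggests the current metadata does not yet describe comprehensively.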

TCDL is limited to metadata on test cases, including parameters for test 
cases; this needs to be made explicit. TCDL is metadata about the test case 
code, not the code itself. It seems that this is at a low level and structured 
towards validation testing, comparing input and output, but it does not help 
with API or protocol testing. Is it possible to be both general and useful? We 
want to enable different test labs (consumers) to download the appropriate 
test sets. Perhaps the goal and scope need to be limited. It captures the 
majority of the testing that W3C does.

Next Steps.
Is this worth pursuing in the WG? Yes, but, other than DM, it seems there 
aren't resources to work on it. It is good to have this as a tool within the 
arsenal of tools. ACTION: David to let us know how he wants the WG to consider 
this work: endorse it, contribute to its development, etc.
Received on Thursday, 23 October 2003 14:12:59 UTC
