RE: New Possible TestGL outline

Great work indeed!

Had a couple of questions/comments (since I missed the discussion, you
might have covered these there):

- tagging asserts/normative language by categories is a nice idea. We
should probably revisit what we understand by a test assert, though, to
enable such tagging. Plus, some of the categories (like "undefined")
don't have asserts associated with them.

>(cross platform stuff goes to ExTec.)
- Was there a reason behind removing the "platform independent"
criteria from the guidelines doc?

>- system must allow for tests to be filtered based on metadata
criteria.
Sounds a bit absolute. How much filtering capability would we like?
Filtering on any attribute in the metadata?

- Addition of test cases. For the test management system, the previous
draft required the ability to add tests for any area of the spec. It may
sound trivial, but this is a common flaw of test management systems.

- Ease of use. Looks like it's gone in this outline. It may sound
trivial and non-specific, but it's important for the conformance test
cases.

- Spec coverage. Is this part of the support for metadata?

- Some of the "operational-like" checkpoints removed (for good reason)
could find their place in the Operational Guidelines. Shall we do such
an analysis, to make sure we are covered?

Again, apologies for not being able to call in last week - I was
completely overloaded.



-----Original Message-----
From: pfawcett [mailto:pfawcett@speakeasy.org] 
Sent: Sunday, March 09, 2003 7:31 AM
To: www-qa-wg@w3.org
Subject: New Possible TestGL outline


Howdy folks,

This is a brief breakdown of the new structure/outline for TestGL that
Patrick and I came up with after Friday's meeting. We both think that
this has a significantly better flow to it, addressing some of Lynne's
major general concerns. We also tried to steer away from 'loaded
language' like methodology, test framework, and a few other terms we
found in the document that caused difficulty because they meant
different things to different folks. We also tried to condense things
down a bit. At the beginning we thought we would be getting rid of a
guideline, but instead we ended up creating a new one. The checkpoints,
though, are down to a reasonable 2-4 per guideline rather than up to 14
in some. We also made an effort not to duplicate material (and thus
potentially get out of sync) with other documents. Primarily this
concerns the interaction of TestGL and OpsGL. Ops has a number of
checkpoints concerning how test materials are contributed, what
criteria must be followed, and so on, so we made an effort not to
re-include that here. Many of these checkpoints do not have priorities
assigned yet either. Finally, we tried to steer clear of "this is the
right way to write a test suite" and instead focus on "this is what a
good test suite should be composed of".

Lynne's new checkpoint for Issue 107 (that has just been posted) would
fit nicely into the new G1 or G2. If nothing else, this can serve as a
basis for discussion on Monday: is this a better organization than
before, and are we missing anything?

Thanks,
Peter

Outline of new guidelines:

G1 - high-level functional analysis of the spec to determine a strategy
for test development. (was G2-G3)
	- combine 2.1 and 2.2 into one checkpoint: analyze the
	specification and determine how to structure test materials;
	determine what testing areas the specification is composed of.
	- 3.1 determine how to cover each area. Is only one approach
	going to be used, or will there be more than one?
	- 1.10 develop user scenarios for the specification.
	(move 3.2 to ExTec or some such, or keep it as descriptive text.)
	
G2 - deep analysis of the spec to extract what needs to be tested and
how. (was G1)
	- extract assertions/normative language and tag according to
	category, using the categories provided by Patrick. In other
	words, rather than having explicit checkpoints for each of the
	required, optional, deprecated, discretionary, under-defined,
	etc. asserts, have one checkpoint that has all asserts or
	normative language extracted and then grouped by category. It's
	the same basic idea, but it folds 4 checkpoints into one. (a
	rough sketch follows this list.)
	- determine level of coverage (depth/breadth) and priority of
	coverage (what's most important).
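
To make the tagging idea concrete, here is a rough sketch (Python,
purely illustrative: the categories are the ones listed above, but the
assertion data and field names are invented):

    # One checkpoint: extract every assertion, tag it with a category,
    # and group by category - instead of one checkpoint per category.
    # Sample data and field names are invented for illustration.
    CATEGORIES = {"required", "optional", "deprecated",
                  "discretionary", "under-defined"}

    def tag_assertion(assert_id, text, category):
        if category not in CATEGORIES:
            raise ValueError("unknown category: " + category)
        return {"id": assert_id, "text": text, "category": category}

    assertions = [
        tag_assertion("A-001", "A conforming processor MUST ...", "required"),
        tag_assertion("A-002", "A processor MAY ...", "optional"),
    ]

    by_category = {}
    for a in assertions:
        by_category.setdefault(a["category"], []).append(a)

The point is just that a category tag on each assert carries the same
information as four separate checkpoints.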
	
G3 - test management system (was part of G4)
	- have a test management system.
	- the system must support metadata such as documentation,
	pass/fail criteria, coverage level, state of the test,
	association back to asserts, and dimensions of variability. (a
	strawman record follows this list.)
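
As a strawman for what one such metadata record might hold (Python
again; the field names are my guesses, not a settled vocabulary):

    # Strawman test-case metadata record covering the fields listed
    # above; all names and sample values are invented.
    from dataclasses import dataclass, field

    @dataclass
    class TestCaseMetadata:
        test_id: str
        documentation: str   # what the test does and why
        pass_criteria: str   # how pass/fail is decided
        coverage_level: str  # e.g. "basic" vs. "exhaustive"
        state: str           # e.g. "draft", "accepted", "retired"
        assert_ids: list = field(default_factory=list)  # links back to asserts
        dov: dict = field(default_factory=dict)  # dimensions of variability

    tc = TestCaseMetadata(
        test_id="TC-0042",
        documentation="Checks processor behaviour for ...",
        pass_criteria="output matches the reference file",
        coverage_level="basic",
        state="draft",
        assert_ids=["A-001"],
        dov={"module": "core", "profile": "full"},
    )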

G4 - test suite/case development (was G6)
	- prototype test framework (6.2)
	(Ops deals with submission and criteria for acceptable
	submissions.)

G5 - test execution. (was part of G4)
	- the metadata from the management system must provide sufficient
	information to allow tests to be executed in a predictable and
	consistent way.
	- automation is encouraged. (cross-platform stuff goes to ExTec.)
	- the system must allow tests to be filtered based on metadata
	criteria; a toy example follows this list. (This is where the DOV
	really enters in for a test suite, rather than in the analysis
	part. In the analysis you want to identify them, but here is
	where you really care.)
	- the test execution process should save output from tests for
	analysis (pass/fail, which cases failed, and logs if relevant).
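
A toy example of metadata-driven filtering (attribute names invented;
the point is only that any metadata attribute can become a filter):

    # Toy filter: select the tests whose metadata matches every
    # requested attribute/value pair. Attribute names are invented.
    tests = [
        {"id": "TC-1", "profile": "full", "state": "accepted"},
        {"id": "TC-2", "profile": "basic", "state": "draft"},
        {"id": "TC-3", "profile": "full", "state": "draft"},
    ]

    def filter_tests(tests, **criteria):
        return [t for t in tests
                if all(t.get(k) == v for k, v in criteria.items())]

    # e.g. run only the accepted tests for the "full" profile:
    to_run = filter_tests(tests, profile="full", state="accepted")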
		
G6 - result reporting (was G5)
	- must support result reporting (5.1)
	- it should create reports as a unified package (for example, a
	web page (5.3))
	- it must indicate what passed and what failed.
	- it should be automated if possible.
	- it should support filtering and comparison of results. (a rough
	sketch follows this list.)
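
And a similarly rough sketch of comparing two result sets (the result
format is invented):

    # Report the tests whose outcome changed between two runs.
    run_a = {"TC-1": "pass", "TC-2": "fail", "TC-3": "pass"}
    run_b = {"TC-1": "pass", "TC-2": "pass", "TC-3": "fail"}

    def compare(old, new):
        return {t: (old.get(t), new.get(t))
                for t in set(old) | set(new)
                if old.get(t) != new.get(t)}

    print(compare(run_a, run_b))
    # -> {'TC-2': ('fail', 'pass'), 'TC-3': ('pass', 'fail')}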
	
G7 - conformance testing. (was G7)
	- encourage vendors to use the test suite. (P1)
	- encourage vendors to publish results. (P3)
