RE: Test suite validity

Jeremy Carroll wrote that Ian over-emphasized testing one technology
at a time, but that is the area of testing with probably the poorest
support: it needs to be emphasized precisely so that it isn't ignored.

Most of the test cases (all that I've seen) already -do- rely on 
multiple technologies. The simplest public CSS tests I've 
come across rely on a correct HTML renderer or scripting engine 
to provide the information necessary to determine whether the 
CSS tests pass or fail. 

In most cases the display-dependence is necessary, because most of 
CSS support is display-related. Two examples that are not display-dependent 
are parsing and much of the cascade; yet even there, error detection 
generally depends on either a scripting or a rendering engine for 
evaluation, and those second technologies have implementation bugs of 
their own.

Granted, it's difficult to test most of CSS in isolation from
a rendering engine, which suggests to me that the test harness, 
if one is necessary at all, should be as simple and unobtrusive 
as possible: not a harness that relies on HTML frames 
support, scripting, images, and form submission, for example. 
Even a dependence on external stylesheets is a drawback 
in my opinion; external stylesheets should be needed only 
when testing support for external stylesheets themselves. 
Additionally, too much dependence can make it 
impossible to run the tests on some devices.
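To make the point concrete, here is a sketch of what a minimal, self-contained 
test page might look like: no frames, no scripting, no images, no external 
stylesheet, and a pass condition readable from the rendered page alone. (This 
is an illustrative example, not a page from any actual suite.)

```html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
<html>
<head>
<title>CSS test: 'color' applies to a class-selected paragraph</title>
<style type="text/css">
/* The only feature under test: one selector and one declaration. */
p.test { color: green; }
</style>
</head>
<body>
<!-- The pass condition is stated in the page itself; no harness,
     scripting, or external resource is needed to judge it. -->
<p class="test">This sentence should be green.</p>
</body>
</html>
```

A harness that merely links to such pages adds no further technology 
dependencies, and the same page can still be loaded on a constrained device.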

At some point a dependence on other technologies is unavoidable,
even necessary, but the dependence should be minimized as 
much as possible.

--Brad

-----Original Message-----
From: Jeremy Carroll [mailto:jjc@hplb.hpl.hp.com] 
Sent: Friday, March 12, 2004 8:26 AM
To: Mary Brady
Cc: Ian Hickson; www-qa@w3.org; www-dom-ts@w3.org; Tantek Çelik; dom@w3.org; wchang@nist.gov; tmichel@w3.org; mary.brady@nist.gov; Brad Pettit
Subject: Re: Test suite validity



Mary Brady wrote:

> Many of the complexities of the test harness stem from dealing with 
> other technologies, and how each implementation deals with them.


I felt that Ian's talk over-emphasized testing just one technology at a 
time. If the problems occur in using two or three technologies or two or 
three specifications together then test suites should cover those cases. 
This is particularly important where it is not clear which spec covers the 
area since we can get implementorA saying "reading spec A we do it this 
way", and implementorB saying "reading spec B we do it this other way".

I think a test case is a good way of banging the heads of WG-A and WG-B 
together.

(While I have phrased this in my issue-driven mindset, I think the point is 
good for conformance testing too - the goal is interoperable 
implementations in that case)



Jeremy

Received on Friday, 12 March 2004 17:17:35 UTC