Re: Request for Feedback On Test Harness

On 02/12/10 21:24, L. David Baron wrote:
> On Wednesday 2010-12-01 14:27 +0100, James Graham wrote:
>> On 12/01/2010 02:08 PM, Henri Sivonen wrote:
>>
>>> However, if people start writing their non-assertion code outside a
>>> test() wrapper, isn't the whole point of the test() wrapper defeated
>>> and onerror becomes necessary anyway?
>>
>> Quite apart from the fact that it is not universally implemented, I
>> don't think onerror really solves any problem. At best it is a way
>> of saying "something in this file failed". It is important to Opera
>> to get a consistent list of tests back from a given file (that is,
>> when there is no crash we want all the tests to run, even if there
>> was an unexpected exception not associated with an assert). When we
>> have imported testsuites that failed to do this, it has caused
>> problems.
>
> What sort of problems?  Are these problems really bad enough to
> justify making writing tests for this test suite much more
> complicated and bad enough to justify prohibiting many existing
> tests that browser developers have from being contributed to the
> test suite?

The problems are endemic to any regression-tracking system, which, as 
far as I am aware, both we and Microsoft use.

At a fundamental level, our testing system relies upon test-name/result 
key-value pairs, and changes in how many pairs a test file sends back 
are hard to cope with, as there are numerous reasons such a change can 
occur.
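
To make that concrete, here's a toy sketch of the comparison step (the 
names and shape here are invented for illustration, nothing like our 
actual system):

     // Results from the previous run, keyed by test name.
     var previous = {
         "isoDate year ok":  "PASS",
         "isoDate day ok":   "PASS",
         "isoDate month ok": "PASS"
     };

     // The current run, where the file threw partway through and
     // two of the pairs never arrived.
     var current = {
         "isoDate year ok": "PASS"
     };

     // A name present last time and absent now is ambiguous: a
     // regression, or just a test deleted from the suite? The data
     // alone can't tell, so a human has to triage every one.
     var missing = [];
     for (var name in previous) {
         if (!(name in current))
             missing.push(name);
     }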

You can't flag a test going from pass/fail to simply not being there 
(e.g., because the larger test it is part of threw an exception 
somewhere), as it could equally well mean the test was removed from the 
testsuite (because it's bogus, a duplicate, etc.). If you did flag it, 
the removal would show up as a regression on every build, which is 
expensive, as it takes up developer/QA time verifying it and concluding 
it is bogus each time. There has to be a way to always get the same set 
of tests back, and a good, clear distinction between tests and asserts 
certainly helps with that. Something like what MochiKit does, e.g.:

     var testDate = isoDate('2005-2-3');
     t.is(testDate.getFullYear(), 2005, "isoDate year ok");
     t.is(testDate.getDate(), 3, "isoDate day ok");
     t.is(testDate.getMonth(), 1, "isoDate month ok");

This has a whole bunch of problems of its own: if one of those 
functions throws an exception, the entire file errors out, instead of 
just the specific check. It seems much better for only the test for 
that function to be reported as having thrown, rather than the entire 
file.
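
To spell out the failure mode (a sketch; assume isoDate throws for 
whatever reason, and the try/catch stands in for whatever file-level 
error handling, e.g. onerror, catches it):

     try {
         var testDate = isoDate('2005-2-3'); // assume this throws
         t.is(testDate.getFullYear(), 2005, "isoDate year ok");
         t.is(testDate.getDate(), 3, "isoDate day ok");
         t.is(testDate.getMonth(), 1, "isoDate month ok");
     } catch (e) {
         // The harness gets one opaque "this file errored" result;
         // the three named results above are simply never reported,
         // which is exactly the disappearing-pairs problem.
     }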

The difficulty, I think, isn't inherent; it seems it should be possible 
to keep this from being much more complex than the above… if we have 
setUp and tearDown functions, then we could do something similar…

Say:

     var testDate;

     setUp(function() {
         testDate = isoDate('2005-2-3');
     });

     test(function() { assertEquals(testDate.getFullYear(), 2005); });
     test(function() { assertEquals(testDate.getDate(), 3); });
     test(function() { assertEquals(testDate.getMonth(), 1); });

I mean, this isn't quite as concise, but it does avoid problems if an 
exception gets thrown anywhere.
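
And the harness side of this can stay small. A rough sketch of what I 
mean (results, assertEquals, and the run-before-each-test behaviour of 
setUp are all just placeholders here, not a worked-out API):

     var results = [];
     var setUpFunc = null;

     function setUp(func) {
         setUpFunc = func;
     }

     function assertEquals(actual, expected) {
         if (actual !== expected)
             throw new Error("expected " + expected +
                             ", got " + actual);
     }

     function test(func) {
         try {
             if (setUpFunc)
                 setUpFunc(); // a throwing setUp fails only this test
             func();
             results.push("PASS");
         } catch (e) {
             // Only this test is recorded as having thrown; later
             // test() calls still run, so the file always reports
             // the same set of tests back.
             results.push("ERROR: " + e);
         }
     }

With that, if isoDate throws, each of the three tests above errors 
individually, and the set of tests the tracking system sees never 
changes.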

Thoughts?

-- 
Geoffrey Sneddon — Opera Software
<http://gsnedders.com>
<http://opera.com>

Received on Tuesday, 7 December 2010 14:01:25 UTC