RE: [Test Harness] Additions to the test harness

Hi James, 

Thanks for your quick response. We use these functions today in the Navigation Timing tests; they provide an abstraction and simplify the coding of the tests. 

In these performance tests we ideally want all of the results. For example, when testing the ordering of timings, we want to see which specific attributes did not comply with the normative requirements. However, under the current model a single failed assertion causes the entire test to be considered not passing. 

Here are some code examples:

http://test.w3.org/webperf/tests/submission/Microsoft/NavigationTiming/test_timing_unloadEvent.htm
	test_true(performanceNamespace.timing.unloadEventStart > 0, 'timing.unloadEventStart is greater than 0');

http://test.w3.org/webperf/tests/approved/test_navigation_redirectCount_none.htm
	test_equals(performanceNamespace.navigation.redirectCount, 0, 'navigation.redirectCount on a non-redirected navigation');

http://test.w3.org/webperf/tests/resources/webperftestharness.js 
In this file, test_greater_than is wrapped by test_timing_greater_than(attribute_name, greater_than).

Used here: http://test.w3.org/webperf/tests/submission/Microsoft/NavigationTiming/test_timing_attributes_ordering_simple.htm 
	test_timing_greater_than('navigationStart', 0); 
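
For context, here is a rough sketch of how these wrappers are layered on the testharness.js primitives (illustrative only, not the actual webperftestharness.js source; the key point is that each helper runs its assertion inside its own test()):

	// Illustrative sketch, not the actual webperftestharness.js code.
	// Each helper wraps a single assertion in its own test(), so a
	// failing attribute check is reported individually and does not
	// stop the remaining checks from running.
	function test_true(value, msg) {
	    test(function() { assert_true(value, msg); }, msg);
	}

	function test_equals(value, expected, msg) {
	    test(function() { assert_equals(value, expected, msg); }, msg);
	}

	function test_greater_than(value, greater_than, msg) {
	    test(function() { assert_true(value > greater_than, msg); }, msg);
	}

	function test_timing_greater_than(attribute_name, greater_than) {
	    var msg = 'timing.' + attribute_name + ' is greater than ' + greater_than;
	    test_greater_than(performanceNamespace.timing[attribute_name], greater_than, msg);
	}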


Best, 
Anderson

-----Original Message-----
From: James Graham [mailto:jgraham@opera.com] 
Sent: Tuesday, January 25, 2011 1:48 AM
To: Anderson Quach
Cc: public-html-testsuite@w3.org
Subject: Re: [Test Harness] Additions to the test harness

On Mon, 24 Jan 2011, Anderson Quach wrote:

> 
> Hi public-html-testsuite,
> 
> In the Web Performance WG, we would like to integrate some additional 
> common functionality into the common testharness.js that is being developed in the HTML WG [1]. This functionality currently resides in the resources folder under WebPerf [2].
> 
> These helper functions abstract the testing of Booleans, Equality and 
> Greater than. We'd like to add the following helpers to
> testharness.js:

What is the use case for these functions? Do you have some example code? 
On the face of it, it looks like the effect would be to have less code wrapped in callbacks to test() or test.step(). This defeats one of the core design goals of the framework, which is to allow as many unexpected error conditions as possible to be handled gracefully (i.e. caught by a try/catch handler rather than propagated to the top level).
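
(For illustration, not from the original mail: the pattern the framework expects is roughly the following, borrowing an attribute from the examples above.)

	// Standard testharness.js usage: the assertion runs inside a test()
	// callback, so an unexpected exception is caught by the harness and
	// reported as a failure of this one test rather than escaping to
	// the top level.
	test(function() {
	    assert_true(performanceNamespace.timing.unloadEventStart > 0,
	                'timing.unloadEventStart is greater than 0');
	}, 'timing.unloadEventStart is greater than 0');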

It is also unclear to me that the failed-asserts-are-fatal model will work well for performance testing. Indeed it seems possible that this is exactly the issue that you are trying to work around by adding these functions. Presumably for a performance test one typically wants all the results, even if one doesn't meet some expectations? Or am I misunderstanding how you intend to use the framework?
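
(A hypothetical illustration of the failed-asserts-are-fatal behaviour, using timing attributes from the tests linked above:)

	// Inside a single test(), the first failing assert throws and ends
	// the test, so the later ordering checks never run and no result
	// is recorded for them.
	test(function() {
	    var t = performanceNamespace.timing;
	    assert_true(t.unloadEventStart > 0, 'unloadEventStart is greater than 0');
	    assert_true(t.unloadEventEnd >= t.unloadEventStart,
	                'unloadEventEnd is at least unloadEventStart');
	}, 'timing attribute ordering');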
