RE: [Test Harness] Additions to the test harness

Hi James, 

Thanks for your quick response. We use these functions today in the Navigation Timing tests; they provide an abstraction and simplify writing the tests. 

In these performance tests we ideally want all of the results: for example, when testing the ordering of timings, we would want to see which specific attributes did not comply with the normative requirements. However, under the current model a single failed assertion causes the entire test to be considered not passing. 

Here are some code examples:
	test_true(performanceNamespace.timing.unloadEventStart > 0, 'timing.unloadEventStart is greater than 0');
	test_equals(performanceNamespace.navigation.redirectCount, 0, 'navigation.redirectCount on a non-redirected navigation'); 
test_greater_than is wrapped by test_timing_greater_than(attribute_name, greater_than).

Used here: 
	test_timing_greater_than('navigationStart', 0); 
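To make the layering concrete, here is a self-contained sketch of how these helpers could sit on top of the testharness.js primitives. The test()/assert_* stand-ins and the performanceNamespace data below are assumptions for illustration only, not the actual WebPerf resources code:

```javascript
// Stand-ins modelled on the testharness.js API, included only so this
// snippet is self-contained; the real suite gets these from testharness.js.
const results = [];

function test(fn, name) {
  // testharness.js catches assertion failures per test(), so one
  // failing attribute does not abort the remaining results.
  try {
    fn();
    results.push({ name: name, status: "PASS" });
  } catch (e) {
    results.push({ name: name, status: "FAIL", message: e.message });
  }
}

function assert_true(value, desc) {
  if (value !== true) throw new Error(desc || "expected true");
}

function assert_equals(actual, expected, desc) {
  if (actual !== expected) {
    throw new Error((desc || "") + ": expected " + expected + ", got " + actual);
  }
}

// The proposed helpers: each wraps a single assertion in its own test(),
// so every attribute is reported individually.
function test_true(value, msg) {
  test(function() { assert_true(value, msg); }, msg);
}

function test_equals(actual, expected, msg) {
  test(function() { assert_equals(actual, expected, msg); }, msg);
}

function test_greater_than(value, greater_than, msg) {
  test(function() { assert_true(value > greater_than, msg); }, msg);
}

// Hypothetical wrapper matching the usage shown above.
function test_timing_greater_than(attribute_name, greater_than) {
  var value = performanceNamespace.timing[attribute_name];
  test_greater_than(value, greater_than,
      "timing." + attribute_name + " > " + greater_than);
}

// Made-up data standing in for window.performance, for illustration.
const performanceNamespace = {
  timing: { navigationStart: 1295913600000, unloadEventStart: 0 }
};

test_timing_greater_than("navigationStart", 0);  // reported as PASS
test_timing_greater_than("unloadEventStart", 0); // reported as FAIL; other tests still run
```

The point of the wrappers is the last two lines: the failure of unloadEventStart is recorded as its own result rather than aborting the run, which is the per-attribute reporting we want for performance tests.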


-----Original Message-----
From: James Graham [] 
Sent: Tuesday, January 25, 2011 1:48 AM
To: Anderson Quach
Subject: Re: [Test Harness] Additions to the test harness

On Mon, 24 Jan 2011, Anderson Quach wrote:

> Hi public-html-testsuite,
> In the Web Performance WG, we would like to integrate some additional 
> common functionality into the common testharness.js that is being developed in the HTML WG [1]. This functionality currently resides under the resources folder under WebPerf[2].
> These helper functions abstract the testing of Booleans, Equality and 
> Greater than. We'd like to add the following helpers to
> testharness.js:

What is the use case for these functions? Do you have some example code? 
On the face of it, it looks like the effect would be to have less code wrapped in callbacks to test() or test.step(). This defeats one of the core design goals of the framework, which is to allow as many unexpected error conditions as possible to be handled gracefully (i.e. caught by a try/catch handler rather than propagated to the top level).

It is also unclear to me that the failed-asserts-are-fatal model will work well for performance testing. Indeed, it seems possible that this is exactly the issue you are trying to work around by adding these functions. Presumably for a performance test one typically wants all the results, even if one doesn't meet some expectations? Or am I misunderstanding how you intend to use the framework?

Received on Tuesday, 25 January 2011 17:01:45 UTC