
Re: Helping implementations test error messages

From: James Graham <james@hoppipolla.co.uk>
Date: Thu, 23 Jul 2015 22:49:31 +0100
Message-ID: <55B1616B.9030001@hoppipolla.co.uk>
To: public-test-infra@w3.org
On 23/07/15 13:17, Jeffrey Yasskin wrote:
> Hi testing folks,
>
> In implementing the Web Bluetooth spec, we'd like to test that it
> produces good error messages for developers. However, we'd also like our
> tests to be usable by other implementations without effectively making
> the error messages part of the specification. testharness.js doesn't
> currently have support for this, so I'm looking for guidance on what to
> add to enable it.
>
> An initial proposal from the blink-dev thread on this
> (https://groups.google.com/a/chromium.org/d/topic/blink-dev/8YfKmyq7ugQ/discussion)
> is to have assert_throws(func, {name: 'FooError', printMessage: true})
> call printErrorMessage(e.message), which would be provided by
> testharnessreport.js. Then Blink would put its expected error messages
> in our -expected.txt files, and other implementations would be free to
> check their error messages or not, as they wish. This has the downsides
> that every implementation has to check the same set of error messages,
> if they check any, and that it can't really be used from concurrent tests.
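
To spell out the proposal being quoted, here's a stub sketch of the 
intended flow. Note that the {name, printMessage} form of 
assert_throws and printErrorMessage are proposed names from the 
thread, not existing testharness.js API; both are stubbed below just 
to show the shape of the idea:

```javascript
// Stub of the proposed printErrorMessage hook; in the proposal it
// would be provided by each vendor's testharnessreport.js.
function printErrorMessage(message) {
  console.log("Expected error message: " + message);
}

// Stub of the proposed assert_throws variant: check the error name,
// and optionally report the message for the harness to record
// (e.g. in Blink's -expected.txt files).
function assert_throws(func, expected) {
  try {
    func();
  } catch (e) {
    if (e.name !== expected.name) {
      throw new Error("Got " + e.name + ", expected " + expected.name);
    }
    if (expected.printMessage) {
      printErrorMessage(e.message);
    }
    return;
  }
  throw new Error("Function did not throw");
}

// Example use, with a made-up error:
assert_throws(function() {
  var err = new Error("Bluetooth adapter not available.");
  err.name = "FooError";
  throw err;
}, { name: "FooError", printMessage: true });
```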

Yeah, this strategy doesn't seem like it's really going to work to me. 
In general I'm finding it hard to think of a setup that will work well 
for Chromium, given the way the expectation data format works there. 
For example, I considered adding something like

implementation_defined(unique_name, data);

which would add a list of (name, data) pairs to the test result 
message. An implementation using wptrunner could then use this data, 
in conjunction with .ini files, to make any additional 
implementation-specific checks. It would take some effort to 
implement, and it would be quite a lot of hassle for the people 
maintaining the tests to put the implementation-specific parts in the 
ini file rather than in the test directly.
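
As a rough sketch of what I mean (the function name and the result 
field are hypothetical, nothing like this exists in testharness.js 
today):

```javascript
// Hypothetical implementation_defined(): each call records a
// (name, data) pair that the harness would attach to the test result
// message, for wptrunner to compare against per-implementation .ini
// expectations. The error and its message below are made up.
var implementationDefinedData = [];

function implementation_defined(uniqueName, data) {
  implementationDefinedData.push({ name: uniqueName, data: data });
}

// A test could then record its error message without asserting on it:
try {
  var err = new Error("Bluetooth adapter not available.");
  err.name = "NotFoundError";
  throw err;
} catch (e) {
  implementation_defined("bluetooth-request-device-error", e.message);
}

console.log(JSON.stringify(implementationDefinedData));
```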

I can also imagine having some convention around using a 
__implementation__.js file in a directory that could be filled in with 
the data for implementation-specific asserts. Then a test could do 
something like:

<script src="__implementation__.js"></script>
<script>
test(function() {
     assert_equals(navigator.userAgent, implementation.userAgent);
});
</script>
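
For concreteness, the vendor-filled file might look something like 
this (the property names and values are purely illustrative):

```javascript
// Hypothetical contents of __implementation__.js, maintained by each
// vendor alongside the shared tests; every value here is made up.
var implementation = {
  userAgent: "Mozilla/5.0 (X11; Linux x86_64) Example/1.0",
  bluetoothChooserCancelledMessage:
      "User cancelled the requestDevice() chooser."
};
```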

This would make updating the tests even more of a hassle, because you 
would have to avoid overwriting these files when syncing. You'd still 
have to be careful to quarantine the UA-specific behaviour too.

Honestly, I'm not sure that there are so many cases where we have a 
mix of interoperable and implementation-defined behaviour that it's 
worthwhile to make a special case to support it. How much do we lose 
if people just don't upstream tests for implementation-defined 
behaviour? There are already a number of more fundamental things that 
are hard to upstream, like tests which need non-standard or 
non-web-exposed APIs.
Received on Thursday, 23 July 2015 21:50:50 UTC
