
Re: WebIDL Testing Plan: who is doing what (and why) by when?

From: Cameron McCormack <cam@mcc.id.au>
Date: Tue, 14 May 2013 11:09:27 +1000
Message-ID: <51918EC7.6050609@mcc.id.au>
To: Travis Leithead <travis.leithead@microsoft.com>
CC: Robin Berjon <robin@w3.org>, Arthur Barstow <art.barstow@nokia.com>, Charles McCathieNevile <chaals@yandex-team.ru>, Yves Lafon <ylafon@w3.org>, Philippe Le Hégaret <plh@w3.org>, Tobie Langel <tobie@w3.org>, Dominique Hazael-Massieux <dom@w3.org>, "www-archive@w3.org" <www-archive@w3.org>

Travis Leithead wrote:
> Ultimately, I believe we need to make sure that all the assertions in
> WebIDL have some testing coverage. I started looking at Cameron's
> submitted tests today, and they are a blend of tests that could be
> covered by idlharness.js and those that we would be unable to
> automatically verify using the auto-generated tests.
> I think the next step is to map what parts of WebIDL v1 are already
> covered by the auto-gen'd tests of idlharness.js, and also which
> parts of the spec are covered by Cameron's recently submitted tests
> (I see the tests are all marked up; I just need to go through and
> cross-check.) I'll try to do that while I review Cam's tests, and
> also what's in idlharness.js. ETA 2 weeks?
> Between the two, if we have coverage for all the essentials (Cam
> notes some issues where there aren't two testable
> specs/implementations, and we should review those), then we should
> try to move on to the next step, which is an "implementation report",
> right?

Thanks, Travis, for looking into the coverage.  I think you are right 
that with these tests and those covered by idlharness.js, we should be 
in a position to start putting together an implementation report.  From 
the testing I did while writing the tests, I don't think we have two 
passing implementations of all the tests yet.

Also, I imagine we would want to take only the idlharness.js-generated 
tests that correspond to the set of API features we want to rely on. 
Does that sound right?
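For readers who haven't used it, a typical idlharness.js page is configured roughly as follows. It runs inside a testharness.js page in a browser, and the interface names and object expressions below ("Widget", "Gadget", makeWidget()) are invented placeholders rather than real spec IDL:

```javascript
// Fragment of a testharness.js page; idlharness.js supplies IdlArray.
// "Widget", "Gadget", and makeWidget() are hypothetical examples.
var idl_array = new IdlArray();

// IDL whose assertions we want generated tests for:
idl_array.add_idls('interface Widget { readonly attribute DOMString name; };');

// IDL that is referenced here but tested elsewhere:
idl_array.add_untested_idls('interface Gadget {};');

// Concrete instances so the per-member tests can run against objects:
idl_array.add_objects({ Widget: ['makeWidget()'] });

idl_array.test();  // emits one generated test per checked assertion
```

Selecting only the generated tests for the features we rely on would then largely be a matter of which IDL fragments get fed to add_idls().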

As for the other half of the exit criteria -- whether specifications are 
correctly using all of the Web IDL features -- I think we can base this 
on the features we are relying on for the tests.  I don't recall coming 
across any invalid IDL that I wrote tests against.  So I believe we can 
state that we have met this criterion, apart from the exceptions I 
listed in notes.txt.

I am not sure where the various IDL parser tools come into this.  There 
isn't a conformance class for IDL processors in the spec, and I'm not 
sure that the grammar in the spec being actually parseable is something 
that is interesting to demonstrate by having programs that can do so -- 
not least because we would then need tests for those programs themselves 
to show that they are correct.
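To illustrate the scale of that last point: a program demonstrating parseability need not be elaborate for trivial cases.  The following is a toy sketch only -- it is not idlharness.js or a real Web IDL parser, and it recognizes just a tiny invented subset of the grammar (a single interface with optional inheritance and attribute members):

```javascript
// Toy sketch: hand-rolled recognizer for a tiny subset of Web IDL
// (one interface, optional inheritance, attribute members only).
// Not a real parser; real tools must handle the full grammar.
function parseInterface(idl) {
  var m = /^\s*interface\s+(\w+)\s*(?::\s*(\w+)\s*)?\{([^}]*)\}\s*;\s*$/.exec(idl);
  if (!m) throw new Error('parse error');
  var members = m[3].split(';').map(function (s) { return s.trim(); })
    .filter(Boolean)
    .map(function (s) {
      var a = /^(readonly\s+)?attribute\s+(\w+)\s+(\w+)$/.exec(s);
      if (!a) throw new Error('unsupported member: ' + s);
      return { name: a[3], type: a[2], readonly: !!a[1] };
    });
  return { name: m[1], inherits: m[2] || null, members: members };
}

var parsed = parseInterface(
  'interface Node : EventTarget {' +
  ' readonly attribute DOMString nodeName;' +
  ' attribute DOMString textContent; };');
console.log(JSON.stringify(parsed));
```

Run under Node, this prints the parsed structure.  The point is only that a mechanical check of parseability is cheap for toy inputs; showing that the full grammar parses would indeed require tests for the parser itself, as noted above.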
Received on Tuesday, 14 May 2013 01:10:26 UTC
