
Re: WebIDL Testing Plan: who is doing what (and why) by when?

From: Arthur Barstow <art.barstow@nokia.com>
Date: Tue, 21 May 2013 09:09:20 -0400
Message-ID: <519B7200.9020001@nokia.com>
To: Cameron McCormack <cam@mcc.id.au>, Travis Leithead <travis.leithead@microsoft.com>
CC: Robin Berjon <robin@w3.org>, Charles McCathieNevile <chaals@yandex-team.ru>, Yves Lafon <ylafon@w3.org>, Philippe Le Hégaret <plh@w3.org>, Tobie Langel <tobie@w3.org>, Dominique Hazael-Massieux <dom@w3.org>, "www-archive@w3.org" <www-archive@w3.org>, "public-webapps-testsuite@w3.org" <public-webapps-testsuite@w3.org>
On 5/13/13 9:09 PM, ext Cameron McCormack wrote:
> Travis Leithead wrote:
>> Ultimately, I believe we need to make sure that all the assertions in
>> WebIDL have some testing coverage. I started looking at Cameron's
>> submitted tests today, and they are a blend of tests that could be
>> covered by idlharness.js and those that we would be unable to
>> automatically verify using the auto-generated tests.
>>
>> I think the next step is to map what parts of WebIDL v1 are already
>> covered by the auto-gen'd tests of idlharness.js, and also which
>> parts of the spec are covered by Cameron's recently submitted tests
>> (I see the tests are all marked up; I just need to go through and
>> cross-check.) I'll try to do that while I review Cam's tests, and
>> also what's in idlharness.js. ETA 2 weeks?
>>
>> Between the two, if we have coverage for all the essentials (Cam
>> notes some issues where there aren't two testable
>> specs/implementations, and we should review those), then we should
>> try to move on to the next step, which is an "implementation report",
>> right?
>
> Thanks Travis for looking into the coverage.  I think you are right 
> that with these tests and those covered by idlharness.js, we should be 
> in a position to start putting together an implementation report.  
> From the testing I was doing while writing the tests, I don't think we 
> have two passing implementations of all the tests yet.
>
> Also, I imagine we would want to take only the idlharness.js-generated 
> tests that correspond to the set of API features we want to rely on. 
> Does that sound right?
>
> As for the other half of the exit criteria -- whether specifications 
> are correctly using all of the Web IDL features -- I think we can base 
> this on the features we are relying on for the tests.  I don't 
> recall coming across any invalid IDL that I wrote tests against.  So I 
> believe we can state that we have met this criterion, apart from the 
> exceptions I listed in the notes.txt.
>
> I am not sure where the various IDL parser tools come into this. 
> There isn't a conformance class for IDL processors in the spec, and 
> I'm not sure whether it is interesting to demonstrate that the grammar 
> in the spec is actually parseable by pointing to programs that can do 
> so, if only because we would then need some tests for those programs 
> to show that they are correct.

(Sorry for the delayed response.)

The above seems like a reasonable plan to me.
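
For anyone following along who hasn't worked with the auto-generated tests, an idlharness.js test page is roughly shaped like the sketch below. (The resource paths, the IDL fragment, and the `makeDummy()` factory here are illustrative placeholders from memory, not taken from the actual suite.)

```html
<!DOCTYPE html>
<title>Web IDL interface test (illustrative sketch)</title>
<script src="/resources/testharness.js"></script>
<script src="/resources/testharnessreport.js"></script>
<script src="/resources/WebIDLParser.js"></script>
<script src="/resources/idlharness.js"></script>
<script>
// Feed the harness the IDL under test; it auto-generates subtests
// checking the interface object, its prototype, and each member.
var idl_array = new IdlArray();
idl_array.add_idls("interface Dummy { attribute DOMString name; };");
// Map each interface to expressions yielding concrete objects so the
// generated subtests can also check instances, not just the interface.
idl_array.add_objects({ Dummy: ["makeDummy()"] });
idl_array.test();
</script>
```

Each such page only exercises the Web IDL features the declared IDL actually uses, which is why mapping pages to spec sections (as Travis proposes) is needed to establish coverage.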

Travis - can we assume you will continue to lead this effort and create 
the Implementation Report?

If there are tasks for others, please let us know how we can help with 
this effort.

-Thanks, AB
Received on Tuesday, 21 May 2013 13:10:22 UTC
