
RE: WebIDL Testing Plan: who is doing what (and why) by when?

From: Travis Leithead <travis.leithead@microsoft.com>
Date: Tue, 14 May 2013 00:33:36 +0000
To: Robin Berjon <robin@w3.org>, Arthur Barstow <art.barstow@nokia.com>
CC: Charles McCathieNevile <chaals@yandex-team.ru>, Yves Lafon <ylafon@w3.org>, Philippe Le Hégaret <plh@w3.org>, "Tobie Langel" <tobie@w3.org>, Cameron McCormack <cam@mcc.id.au>, "Dominique Hazael-Massieux" <dom@w3.org>, "www-archive@w3.org" <www-archive@w3.org>
Message-ID: <cd2eccf2775047d0b3b70bd1a605f7ba@SN2PR03MB077.namprd03.prod.outlook.com>
> From: Robin Berjon [mailto:robin@w3.org]
> On 13/05/2013 15:16, Arthur Barstow wrote:
> > * Web IDL parser: there are at least two [idlharness.js] and
> > [webidl2.js]. Which one should be used for CR testing; has anyone
> > committed to maintaining and completing the parser; how is it used
> > vis-a-vis the CR exit criteria?
> 
> idlharness.js is not a WebIDL parser, it uses webidl2.js under the hood.
> As Dom said however, we do have widlproc. Both are believed to be as
> correct as we can figure out.
> 
> > * Cameron's Web IDL tests submitted May 12 [Cameron]. How does this
> > relate to Travis' plan and the parser work?
> 
> I was hoping that that would actually be enough to transition.
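
(For anyone following along who hasn't looked at the pieces: webidl2.js is only the parser. The IDL fragment and names below are invented, but the parse() call is roughly how idlharness.js and other tooling consume it, assuming the npm package name "webidl2".)

    // Node.js sketch, assuming webidl2.js is installed as "webidl2".
    var WebIDL2 = require("webidl2");

    // Parse an IDL fragment into an AST; each definition carries its
    // type ("interface", "dictionary", ...), its name, and its members.
    var ast = WebIDL2.parse(
      "interface Dog { attribute DOMString name; DOMString bark(); };");

    console.log(ast[0].type);            // "interface"
    console.log(ast[0].members.length);  // 2
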

Ultimately, I believe we need to make sure that every assertion in WebIDL has some test coverage. I started looking at Cameron's submitted tests today; they are a blend: some cover ground that idlharness.js's auto-generated tests can already handle, and some cover assertions that we would be unable to verify automatically.
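
To make that distinction concrete, the auto-generated side is basically the pattern below. The IDL and object names are invented for illustration; the IdlArray calls are the idlharness.js API as I understand it from the current test suite.

    // Runs in a test page that has already loaded testharness.js, the
    // WebIDL parser, and idlharness.js from the shared resources/ dir.
    var idl_array = new IdlArray();

    // IDL under test (invented example), plus untested dependencies.
    idl_array.add_idls("interface Dog : Animal { attribute DOMString name; };");
    idl_array.add_untested_idls("interface Animal {};");

    // Instances to poke at, so attributes and operations get checked
    // on real objects rather than only on the prototypes.
    idl_array.add_objects({ Dog: ["new Dog()"] });

    // Generates and runs one testharness.js test per assertion it can
    // derive from the IDL: interface object, prototype chain,
    // attribute getters/setters, operation lengths, and so on.
    idl_array.test();

Anything that can't be derived mechanically from the IDL in that way is what Cam's hand-written tests have to cover.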

I think the next step is to map which parts of WebIDL v1 are already covered by idlharness.js's auto-generated tests, and which parts of the spec are covered by Cameron's recently submitted tests (I see the tests are all marked up; I just need to go through and cross-check). I'll try to do that while I review Cam's tests and what's in idlharness.js. ETA: 2 weeks?

Between the two, if we have coverage for all the essentials (Cam notes some issues where there aren't two testable specs/implementations, and we should review those), then we should try to move on to the next step, which is an "implementation report", right?
Received on Tuesday, 14 May 2013 00:34:43 UTC
