
Re: [HTMLWG] CR Exit Criteria redux

From: James Graham <jgraham@opera.com>
Date: Thu, 27 Sep 2012 09:32:00 +0200 (CEST)
To: Silvia Pfeiffer <silviapfeiffer1@gmail.com>
cc: Adrian Bateman <adrianba@microsoft.com>, Maciej Stachowiak <mjs@apple.com>, "public-html@w3.org" <public-html@w3.org>
Message-ID: <alpine.DEB.2.02.1209270920260.27169@sirius>


On Thu, 27 Sep 2012, Silvia Pfeiffer wrote:

> On Thu, Sep 27, 2012 at 4:32 AM, Adrian Bateman <adrianba@microsoft.com> wrote:
>> On Wednesday, September 26, 2012 8:07 AM, Maciej Stachowiak wrote:
>>> I see your point. But I think such a requirement would be unacceptable to members of
>>> the Accessibility Task Force, who will likely want to submit implementation claims
>>> based on combinations of totally separate software (a browser and a screenreader)
>>> and where it's unlikely the implementor of either piece would make a submission,
>>> let alone both. So I have not added it to the draft CR exit criteria.
>>
>> It is unacceptable to Microsoft that anyone other than Microsoft submit implementation
>> reports for Internet Explorer.
>
> Shouldn't the testing for and decision about UAs having interoperable
> implementations stay with the W3C based on the testing framework that
> we set up?

I am extremely skeptical about using that framework. It isn't well designed 
for running a large number of tests quickly and repeatably. For example, it 
has no way to run reftests automatically, and I'm not sure it is even well 
suited to running javascript tests automatically (I don't think it runs 
the tests in a top-level browsing context, it requires manual setup to 
adjust e.g. popup blocker settings before the test run, and it can't 
recover if a particular testcase causes a hang or a crash in a given 
browser).
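To illustrate the last point about hangs and crashes (a minimal sketch, not a description of how the W3C framework or any vendor harness actually works): an automated runner typically isolates each testcase in its own process with a timeout, so that one misbehaving test costs a single result rather than the whole run. The command-line invocation here is hypothetical.

```python
import subprocess
import sys

def run_testcase(cmd, timeout_s=30):
    """Run one testcase command in its own process.

    A crash shows up as a nonzero exit status and a hang is cut off
    by the timeout, so neither can stall the rest of the run.
    """
    try:
        proc = subprocess.run(cmd, capture_output=True, timeout=timeout_s)
        return "PASS" if proc.returncode == 0 else "FAIL"
    except subprocess.TimeoutExpired:
        return "TIMEOUT"

# Hypothetical usage: each testcase is an independent command.
results = [
    run_testcase([sys.executable, "-c", "pass"]),                # exits cleanly
    run_testcase([sys.executable, "-c", "raise SystemExit(1)"]), # simulated failure
]
print(results)  # ['PASS', 'FAIL']
```

A framework that instead drives all tests inside one long-lived browser session loses this property: a single hang or crash takes down the session and forces manual intervention.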

The difficult process of getting CSS2.1 to Rec. is a clear demonstration of 
the problems that occur when tests require significant manual effort to 
run. In particular, multiple test runs are likely to be needed, and 
vendors are often unable to invest the multiple person-days such runs can 
take when they are not automated. We don't want to repeat those mistakes 
here. The only way to avoid them is to take the results from the testing 
that vendors do anyway as input, rather than asking them to repeat that 
work in a tool less fit for purpose.

Of course, if the W3C is offering to provide all the resources to do this 
tedious manual work, I guess that's a different proposition. But I would 
advise them against doing so; it seems likely to be much more effort than 
is reasonable.
Received on Thursday, 27 September 2012 07:32:39 UTC
