RE: classify every section in the spec

My point about using caniuse was that some of what it lists is not part of the HTML5 spec (getUserMedia)
or is not a normative requirement (Ruby).

That said, we could use some of this data to point at areas of the spec that have poor interop or
lacking implementations.  Some of these are already known and have been raised within the working group, e.g. the scoped
attribute.  I also don't think the list on 'caniuse' is sufficient in terms of covering the HTML5 spec.
(A rough sketch of how the caniuse data could be queried follows the list below.)

Here is the list from 'caniuse':
Hashchange event
contenteditable attribute (basic support)
defer attribute for external scripts
async attribute for external scripts
dataset & data-* attributes
getElementsByClassName
Canvas (basic support)
Text API for Canvas 
New semantic elements
Video element
Audio element
Inline SVG in HTML5
Drag and Drop
Offline web applications (aka appcache)
input placeholder attribute
classList (DOMTokenList)
Session history management
sandbox attribute for iframes
Form validation 
Progress & Meter
Datalist element
HTML5 form features
Range input type
Number input type
Details & Summary elements
Color input type
Download attribute
Date/time input types
* Ruby annotation
** Toolbar/context menu
** Scoped CSS
*** WebGL - 3D Canvas graphics
*** getUserMedia/Stream API

Note:
*   == Not normative
**  == Feature is at risk due to lack of implementation
*** == Not part of the HTML5 spec
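
For what it's worth, below is a rough, untested Python sketch of how we might pull caniuse's raw data and flag the features above that still have patchy support. The data.json URL and layout, the feature-slug mapping, and the browser list are my assumptions based on their public repo (Fyrd/caniuse), so treat it as an illustration of the idea rather than a working tool.

    # Sketch only: flag caniuse features whose latest browser versions do not
    # all report plain "y" (supported).
    # ASSUMPTIONS: the data.json layout ("data" -> slug -> {"title", "stats"}),
    # the feature slugs, and the fixed browser list are unverified guesses.
    import json
    import urllib.request

    DATA_URL = "https://raw.githubusercontent.com/Fyrd/caniuse/master/data.json"

    # Hypothetical mapping from (part of) the list above to caniuse slugs.
    FEATURES = ["hashchange", "contenteditable", "canvas", "video", "audio",
                "dragndrop", "offline-apps", "input-placeholder", "classlist",
                "history", "iframe-sandbox", "form-validation", "details",
                "download", "webgl", "stream"]
    BROWSERS = ["ie", "firefox", "chrome", "safari", "opera"]

    def latest_flag(stats, browser):
        """Support flag ("y"/"a"/"n"/"p"/...) for the newest listed version."""
        versions = stats.get(browser, {})
        if not versions:
            return "?"
        newest = list(versions)[-1]  # data.json lists versions oldest-to-newest
        return (versions[newest].split() or ["?"])[0]  # keep only the main flag

    def main():
        with urllib.request.urlopen(DATA_URL) as resp:
            data = json.load(resp)["data"]
        for slug in FEATURES:
            feature = data.get(slug)
            if feature is None:
                print("%-40s  (slug guess not found)" % slug)
                continue
            flags = {b: latest_flag(feature["stats"], b) for b in BROWSERS}
            if any(f != "y" for f in flags.values()):  # candidate interop gap
                print("%-40s  %s" % (feature["title"], flags))

    if __name__ == "__main__":
        main()

Even if the slug mapping were right, the output is still only per caniuse feature rather than per spec section, which is the granularity problem James raises below.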

-----Original Message-----
From: James Graham [mailto:jgraham@opera.com] 
Sent: Thursday, February 28, 2013 1:16 AM
To: public-html-testsuite@w3.org
Subject: Re: classify every section in the spec

On 02/28/2013 03:12 AM, Michael Dyck wrote:

> The classes above are synthesized from 3 more basic conditions:
>   (1) whether the section has conformance requirements;
>   (2) whether there are tests for that section; and
>   (3) whether there are known interop issues pertaining to that section.
>
> Note also that these conditions can change when (respectively):
>   (a) the spec is edited;
>   (b) tests are submitted or edited; or
>   (c) a new version of a browser is released.
>
> Robin's coverage report already tells us (1) and (2), and (I gather) 
> can be regenerated at will to reflect changes due to (a) and (b).
>
> Thus, rather than classifying sections into A/B/C/D, you could get the 
> same information with less work (both upfront and ongoing) by just 
> classifying them wrt (3), presence/level of known interop issues.
>
> (I'm assuming this has to be done by humans, i.e., you can't easily 
> write a script to deduce the level of known interop issues for a 
> section. Can you? I wondered whether we could import data from 
> caniuse.com, but Kris said no.)

In an ideal world, of course, we would find out about known interoperability issues from the tests themselves. In many cases, though, we might have out-of-band data indicating interop problems but no tests that show them. That is a giant red flag that whatever tests we have are providing insufficient coverage of that part of the spec, irrespective of what other metrics say. So I am certainly in favour of utilizing such data where it is available.

The question is where to source the data. I have the impression that often the author community "feels" like some part of a spec isn't reliable in a set of browsers, but other than caniuse.com (which seems fine to use to me, although the data isn't really granular enough), this isn't recorded anywhere. And there isn't a culture of turning "I feel like Appcache is an interop disaster" into "here are a bunch of tests for Appcache to show the problems I've been having".
