Re: WebPlatform Browser Support Info

Hi, Niels–

On 10/19/13 8:04 AM, Niels Leenheer wrote:
>
> On Oct 19, 2013, at 1:45 PM, PhistucK <phistuck@gmail.com> wrote:
>
>> At least regarding QuirksMode (and this is just a guess), it looks
>> like the raw data exists, just not exposed to the readers, simply
>> because it is too verbose (even more than the already exposed
>> data).
>>
>> I think we should exchange both of them (conclusions and tests, if
>> they have that data), not only conclusions, if the parties are
>> willing to release this data. The more, the merrier.

I think that CanIUse also exposes their tests on GitHub, but I could be 
misremembering.

In any case, I don't want to ask people to share data that they don't 
want to share. I still think there is value in sharing conclusions.


> Oh, absolutely.
>
> The way I see this is that there are two levels:
> a) conclusion
> b) test results
>
> Both levels can use data from various external sources: For level a)
> you can have Caniuse, Quirksmode, MobileHTML5, HTML5test, MDN and
> maybe others. For level b) you can have Test the web forward and many
> other sources.
>
> Based on level b) the Webplatform team could build their own
> conclusions and use that as another source on level a).
>
> The Webplatform site could then use the data from level a) and
> present it in various ways to the users. The raw data from level b)
> is not exposed on the website.

Yes, this seems reasonable. So, the data model should be able to handle 
results (or claims) that don't have tests associated with them, as well 
as links to tests and results where they do exist. We'd already 
identified this case, since MDN doesn't have test-based data.
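
To make that a bit more concrete, here is a rough sketch of the kind of 
data model I have in mind; the names and fields are placeholders I'm 
making up for illustration, not a proposal for the actual schema:

// Rough sketch only; field names are placeholders, not a schema proposal.

// Level a): a conclusion/claim about a feature in a given browser version.
interface SupportClaim {
  feature: string;            // e.g. "svg-smil"
  browser: string;            // e.g. "Firefox"
  version: string;            // e.g. "24"
  support: "yes" | "no" | "partial" | "unknown";
  source: string;             // e.g. "caniuse", "quirksmode", "mdn"
  confidence: number;         // 0..1; may be lower when no tests back the claim
  tests?: TestReference[];    // optional: absent for sources like MDN
}

// Level b): a pointer to a test and its observed result, where one exists.
interface TestReference {
  testUrl: string;            // link to the test itself
  result?: "pass" | "fail";   // raw result; not necessarily exposed on the site
}

The point is just that the "tests" part is optional, so conclusions from 
sources without raw test data still fit.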

If we don't have tests to share for a result/claim about a feature, then 
we might choose to adjust the confidence level for that result, to 
inform users of the possible disparity, but in most cases, I don't think 
this will be much of an issue.
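
Something as simple as this is what I mean (the 0.75 factor is entirely 
made up, just to show the idea, and it reuses the SupportClaim sketch 
from above):

// Illustration only: lower the confidence of a claim that has no tests behind it.
// The 0.75 factor is arbitrary.
function adjustConfidence(claim: SupportClaim): SupportClaim {
  if (!claim.tests || claim.tests.length === 0) {
    return { ...claim, confidence: claim.confidence * 0.75 };
  }
  return claim;
}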


That said, I do look forward to exposing as many tests as possible, for 
a number of reasons:

1) it gives the user the chance to examine the test itself, and possibly 
find bugs (e.g., false positives, false negatives, or corner cases);

2) it lets us run the test on new versions of browsers, or on different UAs;

3) it allows the user to reference the test as an example (I've done 
this many times with SVG tests).
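
For what it's worth, here's a toy example of the sort of test I mean, 
made up purely for illustration (a real shared test would be far more 
careful):

// Hypothetical feature test: does the browser expose a working inline SVG API?
// A shared test suite would be much more thorough; this only shows the shape.
function supportsInlineSvg(): boolean {
  const el = document.createElementNS("http://www.w3.org/2000/svg", "svg");
  return typeof (el as any).createSVGRect === "function";
}

console.log("inline SVG:", supportsInlineSvg() ? "pass" : "fail");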


Ultimately, we hope to be able to use W3C test suites for this, but 
that's in the future.


Regards-
-Doug
