Re: [whatwg] URL interop status and reference implementation demos

On 11/19/2014 09:55 AM, Domenic Denicola wrote:
> From: Sam Ruby [mailto:rubys@intertwingly.net]
>
>> These results compare user agents against each other.  The testdata
>> is provided for reference.
>
> Then why is testdata listed as a user agent?

It clearly is mislabeled.  Pull requests welcome.  :-)

>> I am not of the opinion that the testdata should be treated as
>> anything other than a proposal at this point.  Or to put it
>> another way, if browser behavior is converging to something other
>> than what the spec says, then perhaps it is the spec that should
>> change.
>
> Sure. But I was hoping to see a list of user agents that differed
> from the test data, so we could target the problematic cases. As is
> I'm not sure how to interpret a row that reads "user agents with
> differences: testdata chrome firefox ie" versus one that reads "user
> agents with differences: ie safari".

I guess I didn't make the point clearly before.  This is not a waterfall 
process where somebody writes down a spec and expects implementations to 
eventually catch up.  That line of thinking sometimes leads to browsers 
closing issues as WONTFIX.  For example:

https://code.google.com/p/chromium/issues/detail?id=257354

Instead I hope that the spec is open to change (and, indeed, the list 
of open bug reports is clear evidence that this is the case), which 
implies that "differing from the spec" isn't the same thing as 
"problematic case".  More precisely: it may be the spec that needs to 
change.
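
To make that concrete, here is a minimal sketch of the distinction I 
have in mind.  The names and the shape of the input are hypothetical, 
not how the report is actually generated:

   // results maps each agent to its parse of one test case, e.g.
   // {testdata: "http://a/", chrome: "http://a/b",
   //  firefox: "http://a/b", ie: "http://a/b"}
   function classify(results) {
     var baseline = results.testdata;
     var browsers = Object.keys(results).filter(function (agent) {
       return agent !== 'testdata';
     });
     var differs = browsers.filter(function (agent) {
       return results[agent] !== baseline;
     });
     // If the browsers all agree with each other but every one of
     // them differs from the testdata, that's a hint the spec, not
     // the browsers, may need to change.
     var browsersConverge = browsers.every(function (agent) {
       return results[agent] === results[browsers[0]];
     });
     return {differs: differs,
             specSuspect: browsersConverge &&
                          differs.length === browsers.length};
   }

A row that reads "user agents with differences: testdata" would then 
be flagged as a spec issue rather than a browser issue.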

>> web-platform-tests is huge.  I only need a small piece.  So for
>> now, I'm making do with a "wget" in my Makefile, and two patch
>> files which cover material that hasn't yet made it upstream.
>
> Right, I was suggesting the other way around: hosting the
> evolving-along-with-the-standard testdata.txt inside whatwg/url, and
> letting web-platform-tests pull that in (with e.g. a submodule).

Works for me :-)
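
For what it's worth, the wiring on the web-platform-tests side could 
be as simple as (the destination path here is hypothetical):

   git submodule add https://github.com/whatwg/url url/resources
   # after a fresh clone of web-platform-tests:
   git submodule update --init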

That being said, there is a highly evolved review process for test 
data in web-platform-tests, and on the face of it, that is worth 
keeping.  Unless there is evidence that it is broken, I'd be inclined 
to keep it as it is.

In fact, once I have refactored the test data out of the JavaScript 
code in my setter tests, I'll likely suggest that it be added to 
web-platform-tests.
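
Purely as an illustration, a refactored record might look something 
like this (the field names are hypothetical, not a settled format):

   { "property": "protocol",
     "href": "http://example.net/path",
     "new_value": "https",
     "expected": { "protocol": "https:",
                   "href": "https://example.net/path" } }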

- Sam Ruby
