
Re: [whatwg] URL interop status and reference implementation demos

From: James Graham <james@hoppipolla.co.uk>
Date: Wed, 19 Nov 2014 18:20:22 +0000
Message-ID: <546CDF66.1090001@hoppipolla.co.uk>
To: Domenic Denicola <d@domenic.me>, "whatwg@lists.whatwg.org" <whatwg@lists.whatwg.org>
On 19/11/14 16:02, Domenic Denicola wrote:
> From: whatwg [mailto:whatwg-bounces@lists.whatwg.org] On Behalf Of
> James Graham
>> That sounds like unnecessary complexity to me. It means that random
>> third-party contributors need to know which repository to submit
>> changes to if they edit the URL test data file. It also means that
>> we have to recreate all the infrastructure we've created around
>> web-platform-tests for the URL repo.
>> Centralization of the test repository has been a big component of
>> making contributing to testing easier, and I would be very
>> reluctant to special-case URL here.
> Hmm. I see your point, but it conflicts with what I consider a best
> practice of having the test code and spec code (and reference
> implementation code) in the same repo so that they co-evolve at the
> exact same pace. Otherwise you have to land multi-sided patches to
> keep them in sync, which inevitably results in the tests falling
> behind. Worse, it discourages the discipline of never making a spec
> change without accompanying test changes.

In practice very few spec authors actually do that, for various reasons
(limited bandwidth, limited expertise, limited interest in testing,
etc.). Even when they do, designing the system around the needs of spec
authors doesn't work well for the whole lifecycle of the technology;
once the spec is being implemented and shipped it is likely that those
authors will have moved on to spend most of their time on other things,
so won't want to be the ones writing new tests for last year's spec.
However, implementation and usage experience will reveal bugs and
suggest areas that require additional testing. These tests will be written
either by people at browser vendors or by random web authors who
experience interop difficulties.

It is one of my goals to make sure that browser vendors — in particular
Mozilla — not only run web-platform-tests but also write tests that end
up upstream. Therefore I am very wary of adding additional complexity to
the contribution process. Making each spec directory a submodule would
certainly do that. Making some spec directories, but not others, into
submodules would be even worse.

> That's why for streams the tests live in the repo, and are run
> against the reference implementation every commit, and every change
> to the spec is accompanied by changes to the reference implementation
> and the tests. I couldn't imagine being able to maintain that
> workflow if the tests lived in another repo.

Well, you could of course do that, for example by using wpt as a
submodule of that repository, or by periodically syncing the test files
to wpt.
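For what it's worth, the submodule option is mechanically straightforward.
Here is a minimal sketch using throwaway local repos; the repo names and
the test data file name are hypothetical stand-ins, not the real wpt
layout:

```shell
# Sketch: a "spec" repo vendoring a "wpt" repo as a git submodule.
# All names below are illustrative placeholders.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the wpt repository, containing one shared test data file.
git init -q wpt
git -C wpt config user.email demo@example.com
git -C wpt config user.name demo
echo "url test data" > wpt/urltestdata.json
git -C wpt add urltestdata.json
git -C wpt commit -qm "add URL test data"

# The spec repo vendors wpt as a submodule, so the spec text and the
# tests can be checked out and updated together in one working tree.
git init -q spec
cd spec
git config user.email demo@example.com
git config user.name demo
git -c protocol.file.allow=always submodule add -q "$tmp/wpt" wpt
git commit -qm "vendor wpt as a submodule"

# The shared test file is now visible inside the spec checkout.
cat wpt/urltestdata.json
```

The flip side, per the argument above, is that every contributor now has
to understand the submodule indirection (`git submodule update --init`,
pinned commits going stale, and so on), which is exactly the added
contribution-process complexity at issue.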

As it is, those tests appear to be written in a way that makes them
incompatible with web-platform-tests and useless for testing browsers.
If that's true, it doesn't really support the idea that we should
structure our repositories to prioritise the contributions of spec
authors over those of other parties.
Received on Wednesday, 19 November 2014 18:20:50 UTC
