Re: WebTV Help for Getting Engaged in W3C Test Effort

From: James Graham <james@hoppipolla.co.uk>
Date: Wed, 23 Apr 2014 17:56:31 +0100
Message-ID: <5357F0BF.4050703@hoppipolla.co.uk>
To: Giuseppe Pascale <giuseppep@opera.com>
CC: "SULLIVAN, BRYAN L" <bs3131@att.com>, Robin Berjon <robin@w3.org>, Tobie Langel <tobie@w3.org>, "public-test-infra@w3.org" <public-test-infra@w3.org>, "public-web-and-tv@w3.org" <public-web-and-tv@w3.org>
On 23/04/14 13:07, Giuseppe Pascale wrote:
> On Wed, Apr 23, 2014 at 11:35 AM, James Graham
>>> Agree that a lot of metadata may be overhead. But there is probably a
>>> middle ground between no metadata and a lot of it. For example, even
>>> though you may not be able to automatically track whether changes in
>>> the spec imply changes in the tests, it would be valuable to know
>>> against which version of the spec a given test was written. Later on,
>>> if the spec changes, people running the tests should be able to update
>>> that information to indicate that the tests are still valid for a
>>> given version of the spec.
>> That actually sounds like quite a lot of work to keep up to date.
> Not sure why, maybe I wasn't clear. All I was asking for is a piece of
> info saying "when I developed this test, I was looking at version X of the
> spec", or, when someone checks it later on, "last time I checked, this
> test was valid for version Y of the spec". It wouldn't be too much work IMO.
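For concreteness, the kind of per-test annotation being proposed might look something like the following. This is a hypothetical sketch, not something from the thread: web-platform-tests tests really do link to the relevant spec section with `<link rel="help">`, but the versioned `<meta>` fields below are invented for illustration.

```html
<!-- Existing wpt convention: link the test to the spec section it covers. -->
<link rel="help" href="https://www.w3.org/TR/dom/#interface-node">
<!-- Hypothetical extensions for the versioning proposal under discussion: -->
<meta name="spec-version-written" content="WD-20140204">
<meta name="spec-version-checked" content="WD-20140604">
```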

Let's focus on the update scenario. Let's say that the spec is updated in 
a way that invalidates some tests. Someone then goes to fix their 
implementation. They find a few tests that fail after they have updated, 
and after realising that the tests are outdated they now have to fix not 
only the tests but also the metadata. Moreover, they really ought to go 
through all the other tests that they didn't update and update their 
metadata to say that each test is believed to be correct under the later 
version of the spec. But that invites confirmation bias, so really they 
should first vet every single test against the spec and only then update 
all the metadata.

Realistically, people aren't going to do this. They will do the minimum 
work possible, i.e. just update the tests that fail, ignoring all 
metadata and any other work that isn't required to achieve their primary 
goal of landing the implementation change. We can try to enforce metadata 
updates through code review, of course, but if we find ourselves asking 
people to make non-functional changes to the tests after they have 
already fixed their implementation and moved on to the next task, they 
will either ignore us, or come to resent web-platform-tests and advocate 
using other testsuites (e.g. internal proprietary tests) with fewer 
"useless" requirements.

> Anyhow, maybe this could be autogenerated as you say, if it were built
> into the process, e.g. if the "approval" commit is on date XYZ then the
> relevant spec is the latest WD published on or before XYZ. Now, I'm not
> sure if there is a way to automate this.

I would simply use the version-control history of the spec and the test. 
If "git annotate" tells you that a particular part of the spec was last 
changed on a certain date, and the test was last edited before that date, 
you probably want to examine the test rather closely for correctness.
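That date comparison is easy to script. Here is a minimal sketch (mine, not from the thread), assuming bash, git ≥ 2.2, and that the spec source and the test live in the same git checkout; the file paths are hypothetical placeholders. `%cI` emits strict ISO 8601 committer dates, so a lexicographic comparison orders them by time when timezones are consistent.

```shell
#!/usr/bin/env bash
# check_stale SPEC_FILE TEST_FILE
# Warn if TEST_FILE was last edited before the last change to SPEC_FILE.
# Both paths must be tracked in the current git repository.
check_stale() {
    spec_date=$(git log -1 --format=%cI -- "$1")  # last commit touching the spec
    test_date=$(git log -1 --format=%cI -- "$2")  # last commit touching the test
    # Strict ISO 8601 dates compare lexicographically (same-timezone assumption).
    if [[ "$test_date" < "$spec_date" ]]; then
        echo "stale: $2 last edited $test_date, spec changed $spec_date"
    fi
}
```

In practice you would want per-section granularity on the spec side ("git annotate"/"git blame" on the relevant section rather than whole-file dates), but the whole-file version already catches the obvious cases.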
Received on Wednesday, 23 April 2014 16:56:58 UTC
