
Re: WebTV Help for Getting Engaged in W3C Test Effort

From: Giuseppe Pascale <giuseppep@opera.com>
Date: Wed, 23 Apr 2014 14:07:35 +0200
Message-ID: <CANiD0kp0d01z1aELNw9BWBkXTRRPiX0Ymat=C7_QE_dmwCSdAQ@mail.gmail.com>
To: James Graham <james@hoppipolla.co.uk>
Cc: "SULLIVAN, BRYAN L" <bs3131@att.com>, Robin Berjon <robin@w3.org>, Tobie Langel <tobie@w3.org>, "public-test-infra@w3.org" <public-test-infra@w3.org>, "public-web-and-tv@w3.org" <public-web-and-tv@w3.org>
On Wed, Apr 23, 2014 at 11:35 AM, James Graham <james@hoppipolla.co.uk> wrote:
>
>>>> 4. Let's assume some organizations/companies decide to contribute to
>>>> the W3C effort. What are the plans when it comes to maintaining the
>>>> tests that get submitted? Are there processes in place to make sure
>>>> that if a spec changes, tests are "invalidated"? In other words, how
>>>> can I know, at any given time, if a test suite for a given spec is
>>>> still valid? And who is in charge of checking that tests are still
>>>> valid when a spec gets updated? Also, are there ways to "challenge" a
>>>> test, i.e. to say that a given (approved) test is in fact invalid?
>>>
>>> It's very hard to automatically invalidate tests when the spec changes.
>>> Even if we had lots of metadata linking tests to spec sections — which
>>> we don't — it is quite common for a test to depend on many things other
>>> than what it claims to be testing. And requiring a lot of metadata adds
>>> an unacceptable overhead to the test authoring process (I have seen
>>> cases where people have had testsuites, but have refused to submit them
>>> to common testsuites due to metadata overheads).
>>>
>>>
>> Agree that a lot of metadata may be overhead. But there is probably a
>> middle ground between no metadata and a lot of it. For example, even
>> though you may not be able to automatically track whether changes in
>> the spec imply changes in the tests, it would be valuable to know
>> against which version of the spec a given test was written. Later on,
>> if the spec changes, people running the tests should be able to update
>> that information to indicate that the tests are still valid for a given
>> spec.
>>
>
> That actually sounds like quite a lot of work to keep up to date.


Not sure why, maybe I wasn't clear. All I was asking for is a piece of
info saying "when I developed this test, I was looking at version X of the
spec", or, when someone checks it later on, "last time I checked, this test
was valid for version Y of the spec". It wouldn't be too much work IMO.

Anyhow, maybe this could be autogenerated as you say, if it were built into
the process, e.g. if the "approval" commit is on date XYZ then the relevant
spec is the latest WD published on or before XYZ. Now, I'm not sure if
there is a way to automate this.
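To make the idea concrete, here is a rough sketch of that date-based
lookup. All dates, URLs, and names below are invented for illustration;
in practice the list of Working Draft publication dates would come from
the spec's W3C TR history page.

```python
from datetime import date

# Hypothetical Working Draft publication dates for some spec
# (illustrative only; a real tool would scrape these from /TR/).
WORKING_DRAFTS = {
    date(2013, 5, 9): "https://www.w3.org/TR/2013/WD-example-20130509/",
    date(2013, 10, 29): "https://www.w3.org/TR/2013/WD-example-20131029/",
    date(2014, 2, 4): "https://www.w3.org/TR/2014/WD-example-20140204/",
}

def draft_for_commit(commit_date):
    """Return the latest WD published on or before the approval commit date."""
    candidates = [d for d in WORKING_DRAFTS if d <= commit_date]
    if not candidates:
        return None  # the test predates the first published draft
    return WORKING_DRAFTS[max(candidates)]
```

Given an approval commit dated 2014-04-23, this would resolve to the
2014-02-04 draft, i.e. the version the test author most plausibly had
in front of them.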



>>> Where it has been added I expect it is often wrong. I am much more
>>> interested in finding ways to automatically associate tests and parts
>>> of specs, e.g. by instrumenting browsers to report which APIs are
>>> called by each test, or by looking at code coverage.
>>
>> Is there anything happening on this, or is it just something to look at
>> at some point?
>
> It's something to look at at some point, but I really hope that I will
> get the chance to do so soon.
>
>
Automation is definitely the way to go. Maybe after coming up with a list
of "metadata" that TV people would like to see, the next step would be to
think about how to automate that: identify the truly minimal set of info
that needs to be manually coded, and extract the rest via scripts.
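As one possible sketch of that split (my assumption, not an agreed
format): the only hand-written metadata in a test is a "help" link
pointing at the relevant spec section, and a script pulls those links
out so everything else can be derived automatically.

```python
import re

# Assumed convention: tests carry <link rel="help" href="..."> pointing
# at the spec section under test; everything else is script-derived.
HELP_LINK = re.compile(r'<link\s+rel=["\']?help["\']?\s+href=["\']([^"\']+)["\']')

def extract_spec_links(test_source):
    """Pull the manually coded spec references out of a test file."""
    return HELP_LINK.findall(test_source)

test_html = """
<!doctype html>
<link rel="help" href="https://www.w3.org/TR/example-spec/#section-4">
<script src="/resources/testharness.js"></script>
"""
print(extract_spec_links(test_html))
# → ['https://www.w3.org/TR/example-spec/#section-4']
```

Combined with a date-to-draft lookup over the approval commit, that one
link per test would be the entire manual burden.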

/g
Received on Wednesday, 23 April 2014 12:08:22 UTC
