Re: WebTV Help for Getting Engaged in W3C Test Effort

On 23/04/14 07:31, Giuseppe Pascale wrote:
> James,
> thanks for your reply. It would be good if you could keep the TV list (and
> myself) in CC, as not all of us are on public-test-infra.
>
> I'm quoting your answer below in full for the benefit of the TV folks and
> adding some more questions/comments.
>
> From: James Graham <james@hoppipolla.co.uk>
> Date: Tue, 22 Apr 2014 17:56:52 +0100
>> On 22/04/14 17:22, Giuseppe Pascale wrote:
>>> In no particular order, here is a set of questions I've heard from various
>>> people, plus some comments from me. Can you help address them? I would like
>>> to invite other IG participants to chime in if I have forgotten something:
>>>
>>> 1. The first question was about "where to find information on the W3C
>>> testing setup and material". Bryan tried to answer with the mail below. In
>>> short it seems to me that the starting point is
>>> http://testthewebforward.org/docs/. Please chime in if anything needs to be
>>> added.
>> Yes, that's the right site. The idea is to centralise all the useful
>> information there. Since it is documenting an ongoing software
>> development process, I have no doubt that the documentation could be
>> improved. One slight problem with our current setup is that the TestTWF
>> docs are in a separate repo, so it's easy to forget to update those docs
>> when making changes to e.g. testharness.js.
>>
>
> are there then plans to coordinate this even further and have everything
> documented on this central page?

I think that *is* the plan. To the extent that it hasn't happened, it's 
because people (including myself) haven't written docs to accompany 
changes. I suspect there are changes we can make so this is easier for 
everyone.

>>> 4. Let's assume some organizations/companies decide to contribute to the
>>> W3C effort. What are the plans when it comes to maintaining the tests that
>>> get submitted? Are there processes in place to make sure that if a spec
>>> changes, tests are "invalidated"? In other words, how can I know, at any
>>> given time, if a test suite for a given spec is still valid? And who is in
>>> charge of checking that tests are still valid when a spec gets updated? Also,
>>> are there ways to "challenge" a test, i.e. to say that a given (approved)
>>> test is in fact invalid?
>> It's very hard to automatically invalidate tests when the spec changes.
>> Even if we had lots of metadata linking tests to spec sections — which
>> we don't — it is quite common for a test to depend on many things other
>> than that which it claims to be testing. And requiring a lot of metadata
>> adds an unacceptable overhead to the test authoring process (I have seen
>> cases where people have had testsuites, but have refused to submit them
>> to common testsuites due to metadata overheads).
>>
>
> Agree that a lot of metadata may be overhead. But there is probably a
> middle ground between no metadata and a lot of it. For example, even
> though you may not be able to automatically track whether changes in the spec
> imply changes in the tests, it would be valuable to know against which version
> of the spec a given test was written. Later on, if the spec changes, people
> running the tests should be able to also update such information to
> indicate that the tests are still valid for a given spec.

That actually sounds like quite a lot of work to keep up to date. It is 
already a reasonably hard sell to get browser vendors to write 
web-platform-tests rather than whatever proprietary test formats they 
are used to, without telling them that every time they update the 
implementation they are on the hook to update metadata in every single 
existing test. In fact I'm rather sure that they simply wouldn't do this 
and that the metadata and reality of the test would rapidly diverge.

> This would be a relatively small overhead and would give you at least an idea
> of how recently the test was checked.

Bolt-on metadata (as opposed to intrinsic metadata of the kind we get
from e.g. git commit history) has two problems: it requires non-trivial
effort to add, and it imposes ongoing maintenance that is often
neglected because the metadata is non-essential to the actual
functioning of the system. Therefore I am very skeptical of the value
of even "low overhead" metadata.
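(The intrinsic kind is also essentially free to consume; for example,
running "git log -1 --format=%cI -- <path to test>" will tell you when
a given test last changed, without anyone having to maintain
annotations by hand.)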

Of course, if there are people who do see the value in such metadata,
it is possible for those parties to add the data and maintain it. This
avoids externalising the cost onto people who aren't experiencing the
benefits.

>> In practice the way we expect to deal with these things is to have
>> implementations actually run the tests and see what breaks when they are
>> updated to match the new spec.
>>> 5. IIRC not all WGs are using the process/tools from TTWF. Is this
>>> documented somewhere? Will these other groups continue with their tools for
>>> the time being, or is there any plan to merge the various efforts at some
>>> point?
>> CSS, at least, currently use different repositories. I think there is a
>> considerable advantage to everyone sharing the same infrastructure, but
>> at the moment there are no concrete plans to merge the repositories.
>>> 6. Do the tests include metadata that easily allows one to (at the very
>>> least) extract the relevant tests for a given spec? Are these
>>> mandatory/checked/maintained?
>> The directory structure reflects the structure of the specs; each spec
>> has its own top level directory and subdirectories within that
>> correspond to sections of the spec.
>
>
> So when a spec section changes (and we have seen this happening with HTML5)
> you will move tests around? Maybe this is not that common?

This isn't common, but yes I expect we would move tests if the spec is 
reorganised.
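To give a rough idea of the current layout (the paths here are just
illustrative), tests for the HTML spec live under html/, with
subdirectories such as html/semantics/scripting-1/the-script-element/
mirroring the section structure of the spec.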

>> Some tests include more metadata,
>> but this is not required.
>
>
> can you give us an indication of what kind of metadata (if any) is
> required for each test? Or is there no requirement for metadata at all,
> and it's always optional?

There is no requirement for metadata that doesn't directly affect the 
running of the tests. We do require in-test annotations to indicate if 
the test requires an unusually long timeout when run. We also require 
in-test data to identify the reference type and reference file for 
reftests, and in-filename annotations to indicate non-automated tests.
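
For concreteness, a rough sketch of what those annotations look like in
practice (the file names are made up). A long-running testharness.js
test carries a timeout meta tag, and a reftest links to its reference
with rel="match":

    <!-- slow-feature.html: testharness.js test needing a long timeout -->
    <!DOCTYPE html>
    <meta charset="utf-8">
    <title>Example long-running test</title>
    <meta name="timeout" content="long">
    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <script>
    async_test(function(t) {
      // Placeholder: a real test would exercise the slow feature and
      // call t.done() from its final async callback.
      setTimeout(function() { t.done(); }, 0);
    }, "Example long-running test");
    </script>

    <!-- green-box.html: reftest that must render identically to its
         reference file -->
    <!DOCTYPE html>
    <title>Example reftest</title>
    <link rel="match" href="green-box-ref.html">
    <style>div { width: 100px; height: 100px; background: green; }</style>
    <div></div>

Non-automated tests are flagged in the file name itself, e.g.
something-manual.html.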

>> Where it has been added I expect it is often
>> wrong. I am much more interested in finding ways to automatically
>> associate tests and parts of specs, e.g. by instrumenting browsers to
>> report which APIs are called by each test, or by looking at code coverage.
>>
>
> is there anything happening on this, or is it just something to look at
> at some point?

It's something to look at at some point, but I really hope that I will 
get the chance to do so soon.

Received on Wednesday, 23 April 2014 09:36:21 UTC