
Re: WebTV Help for Getting Engaged in W3C Test Effort

From: Giuseppe Pascale <giuseppep@opera.com>
Date: Mon, 28 Apr 2014 15:49:50 +0200
Message-ID: <CANiD0koOkcnATfxUoLtjeCDR4t14U_SXO=oqouyc+6eYNCU-rQ@mail.gmail.com>
To: Robin Berjon <robin@w3.org>
Cc: "public-test-infra@w3.org" <public-test-infra@w3.org>, "public-web-and-tv@w3.org" <public-web-and-tv@w3.org>
On Mon, Apr 28, 2014 at 2:58 PM, Robin Berjon <robin@w3.org> wrote:
>  On 23/04/2014 14:07 , Giuseppe Pascale wrote:
>> Not sure why, maybe I wasn't clear. All I was asking for is a piece of
>> info saying "when I developed this test, I was looking at version X of
>> the spec", or, when someone checks it later on, "last time I checked,
>> this test was valid for version Y of the spec". It wouldn't be too much
>> work IMO.
Although I overall agree with you that keeping metadata up to date is a
hard problem, and that the ideal solution is to generate it automatically
as much as possible, some of the issues you point out may be a result of
this group's structure/process, and there may be hidden assumptions you are
making that are not clear to other people and would be good to make
explicit; hence my questions inline.

Note that these questions are not intended as a request to add work or
process to your group (which I doubt I could do anyhow), but to clarify
some of the questions that were asked at the last workshop and to set the
right expectations about what people may and may not find in a W3C test
suite.

> It's a very simple process. When you first create a test, you *might* get
> the metadata right. (Even then it's a big, big "might" because most people
> will copy from an existing file, and through that get wrong metadata.)

I agree that an author may get things wrong, but the reviewer should be
responsible for checking the spec reference. Otherwise I'm not clear what a
reviewed test actually means. Isn't the reviewer supposed to check that the
test matches some spec text? If so, and if the author doesn't write which
spec version he is testing, can the reviewer really know what he is
supposed to check?

Maybe the answer is: always check against the latest editor's draft. If so,
as discussed before, maybe the spec version can be auto-inferred from the
commit date.
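To make the "auto-inferred from the commit date" idea concrete, here is a
minimal sketch (the snapshot labels and dates are invented for
illustration, not real publication history): given a list of dated spec
snapshots, pick the latest one published on or before the test's last
commit date.

```python
from datetime import date

# Hypothetical dated spec snapshots, oldest first (labels/dates invented).
SPEC_SNAPSHOTS = [
    (date(2013, 6, 1), "WD-20130601"),
    (date(2013, 12, 15), "WD-20131215"),
    (date(2014, 3, 20), "CR-20140320"),
]


def infer_spec_version(commit_date, snapshots=SPEC_SNAPSHOTS):
    """Return the latest spec snapshot published on or before commit_date.

    Encodes the "always check the latest ED" convention: we assume the
    author tested against the most recent draft available at commit time.
    Returns None if no snapshot predates the commit.
    """
    candidates = [label for d, label in snapshots if d <= commit_date]
    return candidates[-1] if candidates else None
```

In practice the commit date itself would come from version control (e.g.
`git log -1 --format=%cI -- <testfile>`), so no extra metadata needs to be
maintained by hand.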

> But when it's updated what's your incentive to update the metadata? What
> points you to remember to update it? Pretty much nothing. If it's wrong,
> what will cause you to notice? Absolutely nothing since it has no effect on
> the test.
Once again I would expect a "reviewer" to be in charge of this in a
structured review process (and I would expect an updated test to be subject
to review). And I assume the reviewer actually checks a spec to see if a
test is valid (otherwise, how does he check its validity)?

Maybe, here too, the answer is implicit (check the latest ED), and the
version can then be autogenerated from the commit date.

> So far, in the pool of existing contributors and reviewers, we have people
> who benefit greatly from a working test suite, but to my knowledge no one
> who would benefit from up to date metadata. Without that, I see no reason
> that it would happen.
The reason for raising this issue is that during the workshop some people
asked about this, i.e. how they can know which tests to use given the set
of specs they reference. E.g. how can I know which tests are up to date and
which ones have been written against an old spec (and may not be valid
anymore)?

> This can of course change. If there are people who would benefit from
> metadata I would strongly encourage them to contribute. IMHO the best way
> to do that would be to have an external service that would pull in the list
> of files (from the published manifest) and allow people interested in
> metadata to maintain it there, through a nice and simple Web interface.
> That system could easily poll for updates and queue up required
> verification by the community in charge of metadata. That would avoid
> interfering directly with version control (making changes that impact only
> metadata adds noise) and the review queue (where most of the existing
> reviewers would not be interested in validating metadata changes).

> I believe everything is in place for the system described above to be
> implemented relatively easily. I am fully confident that if there is a
> community that genuinely requires testing metadata they could bash together
> such a tool in under a month. And we're happy to help answer questions and
> provide hooks (e.g. GitHub update hooks) where needed.
Sounds like a sensible approach. Maybe that will also help inform this
discussion, i.e. identify whether there is some basic metadata that is
needed, missing, and that an external group cannot generate.
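As a rough illustration of the polling side of such a service, the core
step could be as simple as diffing two manifest snapshots to find tests
whose content changed and whose metadata therefore needs re-verification.
(The path-to-content-hash shape below is an assumption for the sketch, not
the actual published manifest schema.)

```python
def changed_tests(old_manifest, new_manifest):
    """Compare two manifest snapshots, each a dict mapping a test path to
    a content hash, and return the sorted list of paths that are new or
    whose content changed -- i.e. the tests to queue for metadata review.
    """
    return sorted(
        path
        for path, digest in new_manifest.items()
        if old_manifest.get(path) != digest
    )
```

A service built this way never touches version control or the test review
queue: it only reads the published manifest and maintains its metadata
verification queue on the side, which matches the separation Robin
describes above.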

> This is a volunteer and so far largely unfunded project. It is also by a
> wide margin the best thing available for Web testing today. Its shape and
> functionality matches what current contributors are interested in; if there
> are new interests not so far catered to, the solution is simple: just bring
> in new contributors interested in this!
The goal of this conversation is to bring in new contributors, as
(understandably) some people didn't want to commit to something that looked
like a black box (to them).

Received on Monday, 28 April 2014 13:50:38 UTC
