
Re: Approval process for DOM4 tests

From: Aryeh Gregor <ayg@aryeh.name>
Date: Thu, 19 Jan 2012 13:33:13 -0500
Message-ID: <CAKA+Axk5b8zeiR4w83TZc3NiyK9XS9jeJGA5CgZyREhHNJDYiQ@mail.gmail.com>
To: Arthur Barstow <art.barstow@nokia.com>
Cc: public-webapps-testsuite@w3.org, Anne van Kesteren <annevk@opera.com>, Ms2ger <ms2ger@gmail.com>

On Thu, Jan 19, 2012 at 12:24 PM, Arthur Barstow <art.barstow@nokia.com> wrote:
> I think the proposal for Test Facilitators to be able to submit patches to
> approved tests without an explicit RfR makes sense. After all, the Approval
> process [Approval] does include a WG-wide CfC after the testing group
> considers the test suite complete. However, to provide some transparency
> here, I think the list should be notified after patches by Test Facilitators
> or Editors are submitted to approved tests. Would that be sufficient to
> address this issue?

Interested parties can already subscribe to the feed:

http://dvcs.w3.org/hg/webapps/atom-log

For those who prefer e-mail, there are readily available tools that
will e-mail you whenever a feed updates.  If anyone is interested in
keeping track of changes to tests and this method isn't sufficient for
them, for whatever reason, I suggest they step forward and say so.  We
shouldn't require extra work from test writers or editors unless
there's demonstrated need; such overhead cuts into the time that
could be spent on clearly useful activity, like writing new spec
text and tests.
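For anyone who wants e-mail without hunting for an existing tool, the core of such a feed watcher is small. The sketch below (Python, standard library only) shows the idea: parse the Atom feed, compare entry IDs against the set already seen, and report the new ones. The sample feed and revision IDs are invented for illustration; a real tool would fetch http://dvcs.w3.org/hg/webapps/atom-log and send mail instead of printing.

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

# Illustrative stand-in for the real feed at
# http://dvcs.w3.org/hg/webapps/atom-log (entry IDs invented).
SAMPLE_FEED = """\
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><id>urn:rev:1</id><title>Add DOM4 range tests</title></entry>
  <entry><id>urn:rev:2</id><title>Fix approved test typo</title></entry>
</feed>"""

def new_entries(feed_xml, seen_ids):
    """Return (id, title) pairs for entries not in seen_ids,
    marking them as seen."""
    root = ET.fromstring(feed_xml)
    fresh = []
    for entry in root.iter(ATOM_NS + "entry"):
        entry_id = entry.findtext(ATOM_NS + "id")
        title = entry.findtext(ATOM_NS + "title")
        if entry_id not in seen_ids:
            fresh.append((entry_id, title))
            seen_ids.add(entry_id)
    return fresh

seen = {"urn:rev:1"}  # state persisted between polls
for entry_id, title in new_entries(SAMPLE_FEED, seen):
    print(entry_id, title)  # a real tool would e-mail here
```

Run periodically (e.g. from cron), this is all the "readily available tools" need to do.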

> The second issue is whether a RfR for new tests should be required if the
> submitter is a Test Facilitator or spec Editor.
>
> I can see how this is effectively "make work" if no one other than the
> Facilitator or Editor is going to actually review the tests. OTOH, I think
> there is some value in having a transparent call for review (e.g. we never
> know who may have some comments). We could relax this requirement a bit if
> we allowed a Facilitator/Editor to copy tests to the approved directory and
> then require them to send a RfR that essentially says "I just copied <insert
> test list> to the approved directory. If you have any comments, submit them
> by T+7days". Would something like this be sufficient to address this issue?

Again, interested parties can follow the feeds.  They can also just
wait until the spec becomes more mature, at which point a general CfC
on all approved tests should occur.  I think it's unlikely that anyone
really wants a constant stream of small RfRs as new tests are
submitted or existing ones are expanded or revised (the two cases
should be treated the same).  If anyone does actually want this, and
the feeds aren't
enough for them, I suggest again that they step forward and say so
before we do anything for their benefit.

> The other issue is about how we should interpret silence. This is an
> interesting question and I am open to suggestions. We have two reviews to
> consider: a) a RfR by WebApps' test group for some set of tests; and b) a
> formal CfC by the entire WG for a test suite. For a), I tend to think we
> should continue to interpret silence on a RfR as a NOOP. However, for the
> latter, perhaps we should consider something more rigorous. For instance,
> since we require 2 or more implementations of the spec to advance it to PR,
> perhaps we should also require at least 2 reviews for each test before the
> spec advances to PR. WDYT?

For the spec to advance to PR, each test must be passed by at least
two independent implementations.  The specs we write are meant to
match the major implementations, and we only have four major
implementations.  Any test that two out of four major implementations
pass is quite likely to be correct on that basis alone.  Any test that
the three largest implementations (IE/Gecko/WebKit) all pass is
correct almost by definition, unless they've all agreed they want to
change and haven't gotten around to it yet.  So the requirement that
two implementations pass each test already imposes significant
correctness requirements on the test suite.
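To make that bar concrete, here is a minimal sketch (Python, with invented test names and pass/fail results, not actual data from any implementation) of filtering a results matrix down to the tests that already clear the two-implementation requirement:

```python
# Hypothetical results: test name -> set of implementations that pass it.
results = {
    "Range-cloneContents.html": {"Gecko", "WebKit", "Opera"},
    "Document-createEvent.html": {"IE", "Gecko"},
    "Node-isEqualNode.html": {"WebKit"},
}

def meets_pr_bar(results, minimum=2):
    """Return, sorted, the tests passed by at least `minimum`
    independent implementations."""
    return sorted(
        test for test, passing in results.items()
        if len(passing) >= minimum
    )

print(meets_pr_bar(results))
```

In this toy matrix only the first two tests clear the bar; the third would get exactly the implementer scrutiny described below, since somebody is failing it.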

Given that, I don't think we need to require that anyone check over
every test for correctness.  Implementers will naturally want to
review any tests they fail, because they want to be able to say they
conform to the standards.  So any test that a major implementation
fails is likely to get review, and any test that no major
implementation fails is almost surely correct.  I think this will be
enough to ensure correctness of tests in practice.  If there's any
specific requirement for further review, I suggest that it only be
review of test failures in major implementations, not review of all
the tests as such.
Received on Thursday, 19 January 2012 18:34:18 UTC
