Re: CR exit criteria and features at risk for HTML5

On Fri, Aug 17, 2012 at 7:20 PM, Boris Zbarsky <> wrote:
> I'm not sure what that has to do with what I said.
> The point of the two implementations requirement is to make sure the spec is
> in fact implementable as written.
> If it's implementable standalone but not as part of the overall web
> platform, that's not very helpful.

The point is I'd be happy with no requirement for interoperability at
all to reach REC, because I'm only concerned about IPR requirements.
So I'd be fine with a REC with zero implementations.  Thus I'm
certainly fine with two interoperable implementations that aren't
compatible with web content -- that's no worse than none.

Realistically, few features are implemented in anything but browsers
anyway, so I'm also fine with the non-experimental proviso as long as
publicly-available browser preview editions are considered
non-experimental.
On Fri, Aug 17, 2012 at 9:39 PM, L. David Baron <> wrote:
> Are those instructions sufficient to get a test into the "Approved
> Tests" list? [2]  Or is the "Approved Tests" subset not a relevant
> subset?

The procedure in the HTML WG to get a test approved is to post an RfC
to the mailing list.  It's then automatically approved if there are no
unaddressed objections after a while (generally a month or so, I
think).  In particular, AFAIK, it is not required that anyone actually
reviews the test.

On Fri, Aug 17, 2012 at 11:16 PM, Maciej Stachowiak <> wrote:
> The strict criteria have a pretty specific definition of "not experimental". They *do* allow betas, nightlies, developer previews, and other such versions that are not yet released to all users. They *do not* allow versions created solely for purposes of passing the test, and which are not actually intended to ever be part of software that genuinely ships. In other words, it's meant to rule out a completely artificial implementation that is not meant to successfully browse the web at acceptable quality. Basing interop claims on such implementations would mean that we have not shown implementing the spec is feasible for a real product.

That makes sense to me.

On Fri, Aug 17, 2012 at 11:51 PM, James Graham <> wrote:
> My opinion is that the approval system we have has not worked. Where we have
> imported tests at Opera we typically run them just from the submitted
> directory because that contains far more tests, and by actually running them
> we are more likely to find problems than by simply inspecting them (or
> waiting for others to do so). In the long term I think we should remove the
> submitted/ and approved/ directories altogether; the filesystem structure is
> the wrong place to store review metadata. Of course I still think that code
> review is valuable, but I would much rather integrate with the VCS and say,
> for example, that specific commits, or (file, commit) tuples have been
> reviewed.

For the record, Mozilla also uses non-approved tests in its regression
suite.  For regression testing, it doesn't really matter if the tests
are correct -- you still want to know if your behavior changes.  If
the change is correct and the test is what's bogus, you can figure
that out at the time the test fails, and mark the new failure as
expected.
If people are going to gauge conformance to the standard based on the
test suite, however, we do need some type of approval procedure.  I'm
not certain how worthwhile that is, though.

Received on Sunday, 19 August 2012 09:12:48 UTC