Re: CR exit criteria and features at risk for HTML5

On Thu, Aug 16, 2012 at 3:21 PM, Benjamin Hawkes-Lewis
<bhawkeslewis@googlemail.com> wrote:
> Naive question to the floor from a non-lawyer. As I understand it,
> contributors to a royalty-free specification give up the right to make
> claims that the patents they hold are violated by implementing the
> normative requirements of the specification. Presumably if there are
> no implementations of a normative requirement that makes it harder to
> tell which of their patents might be violated and for contributors to
> estimate the cost of giving up their patent rights? Is this one of the
> motivations of the "two implementations" requirement? What is the
> minimum level of implementation needed to make judgements about likely
> patent violation? Could we have a more lightweight process for
> royalty-freeing bits or snapshots of spec?

W3C Community Groups have such a process, but they don't offer the
same protections.  About the rest, I don't know.  But my understanding
is that the patent grant is only with respect to implementing the
specification, so if the specification weren't implemented, I don't
see how the grant would be effective.

> Some other audiences that need consideration, but aren't mentioned in
> your feedback, are authors, people writing guidance for authors, and
> other spec writers (EPUB etc.). Information about the interoperable
> implementation status of features is critical for that audience. These
> audiences likely naively assume that features in RECs should work. If
> we push HTML to REC without implementations, I think we need to warn
> those audiences that the presence of features in the REC is no
> guarantee that they work!

It's always been the case that RECs may or may not work.  The WHATWG
version of the spec does have implementation status annotations, which
are probably not much less reliable than W3C stability designations as
a guide to actual implementation status.

Realistically, people writing guidance for authors need to test
features in practice before giving advice.  For instance, Mark
Pilgrim's Dive Into HTML5 includes (included?) detailed info on
exactly which features each browser supports, including things like
codec support that are deliberately left undefined by the standard.

> HTML-Next/Living Standard and the linter need to do a better job at
> highlighting implementation status. I agree adoption of a common test
> format could help provide better information here. Does Mozilla have a
> test harness for running the HTML test suite?

Thanks to Ms2ger, we do have a framework for importing tests in the
testharness.js format, and running them as part of our regression
suite on every push (checkin):

http://hg.mozilla.org/mozilla-central/file/50e4ff05741e/dom/imptests
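
For anyone who hasn't seen the format, a testharness.js test is just an
HTML file that pulls in the harness and makes assertions.  The test
below is a made-up example, but the boilerplate is the usual pattern:

    <!DOCTYPE html>
    <title>document.title example (hypothetical test)</title>
    <script src="/resources/testharness.js"></script>
    <script src="/resources/testharnessreport.js"></script>
    <script>
    test(function() {
      // assert_equals() reports a pass/fail result for this one test
      assert_equals(document.title,
                    "document.title example (hypothetical test)");
    }, "document.title reflects the <title> element");
    </script>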

Currently the only parts of the HTML test suite we import are the
submissions by Mozilla, and Opera's submitted microdata tests.  I
don't know why we don't run the whole test suite, but we could easily
do so for any tests in testharness.js format.  We already run the entire
editing test suite, all approved DOM Core tests, etc.

It might just be that not all the directories have properly formatted
manifests telling our test import system which files are tests and
which are support files.  It should be easy to add support for all the
tests.  Even if they're not approved, or even known to be wrong, it
would be good for us to know if we suddenly start passing or failing
one of them.
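
Purely as a hypothetical illustration of the idea (I haven't checked
the exact syntax our importer expects), a per-directory manifest would
just classify each file:

    # hypothetical manifest -- not the real syntax
    support common.js
    document-title.html
    history-pushstate.html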
