
Testing the Data Model

From: Shane McCarron <shane@spec-ops.io>
Date: Fri, 22 Apr 2016 12:05:54 -0500
Message-ID: <CAJdbnOAgvetOGSZY=2TgfMY8iQH7bq6tj5DpGij33vw3fc8xCg@mail.gmail.com>
To: W3C Public Annotation List <public-annotation@w3.org>, testdev@spec-ops.io
(CCing the Spec-Ops testdev mailing list)

In the Web Annotation meeting today Doug touched on something important.
Apologies that I didn't follow up on it at the time.  Doug mentioned that
there are other ways of testing Data Models / grammars.  This is actually
pretty important, and might help us focus this effort.

A little background.  The W3C develops a number of different types of
Recommendations.  You might divide these into "protocol", "grammar", and
"user agent".  These things are all part of the Web Platform.  When you
become a Candidate Recommendation at the W3C, the criteria for exiting
Candidate status include having the features of your Recommendation
supported by at least two implementations.  In that context, testing
protocols is well understood.  Testing user agent behavior is also well
understood.  Testing grammars?  Not so much.

(For purposes of this email, let's pretend that a data model is just a
special case of a grammar.)

What does it mean to have an "implementation" of a grammar? Arguably, the
"implementation" of a grammar is its expression in a meta grammar.  And it
is "implemented" by the working group.  In this case, the real test then is
whether that "implementation" is correct, and whether it can be consumed by
tools that process such a meta grammar.

So, in the case of Web Annotation, you have a data model that is expressed
in prose, with a context defined in JSON-LD and (potentially) a definition
in JSON Schema.  One thing we could consider for CR exit criteria is to
have tests that verify the implementation of the grammar adheres to the
constraints in the prose, PLUS verification that a set of sample data files
(the examples from the spec) can be validated against that implementation
by multiple tools that support JSON-LD / JSON Schema validation.
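To make that concrete, a test of that shape might look like the sketch
below.  The schema fragment and the sample annotation here are hypothetical
placeholders, not the group's actual artifacts; a real test would load the
published schema and the spec's own examples:

```python
# Sketch: validate a sample data file against the grammar's
# JSON Schema "implementation" using the jsonschema library.
import jsonschema

# Hypothetical, drastically simplified schema for an Annotation.
# A real test would load the working group's published schema.
annotation_schema = {
    "type": "object",
    "required": ["@context", "type", "target"],
    "properties": {
        "@context": {"type": "string"},
        "type": {"const": "Annotation"},
        "target": {"type": ["string", "object"]},
    },
}

# A stand-in for one of the spec's examples, used as sample data.
sample = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "target": "http://example.org/page1.html",
}

# Raises jsonschema.ValidationError if the sample does not conform.
jsonschema.validate(instance=sample, schema=annotation_schema)
print("sample validates")
```

Running the same check through a second, independent validator would give
the "multiple tools" evidence mentioned above.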

This, I think, is what Doug was trying to get at.  We don't NEED to take
the output of real clients and ensure that they generate output that
conforms to the Data Model (unless we define user agent conformance
criteria).  We need to prove that the Data Model is complete, that its
definition is well formed (compiles/is parseable), and that it works.
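The "well formed (compiles/is parseable)" part can itself be tested
directly.  A minimal sketch, with inline placeholder text standing in for
the group's actual JSON-LD context and JSON Schema documents:

```python
# Sketch: check that the grammar's "implementation" itself is well
# formed, independent of any client-generated data.
import json
import jsonschema

# Placeholder stand-ins for the published artifacts: a JSON-LD context
# document and a JSON Schema, both as raw text.
context_text = '{"@context": {"oa": "http://www.w3.org/ns/oa#"}}'
schema_text = '{"type": "object", "required": ["type"]}'

# "Compiles / is parseable": both documents must be well-formed JSON.
context = json.loads(context_text)
schema = json.loads(schema_text)

# The schema must itself be a valid JSON Schema (raises
# jsonschema.SchemaError if it is not).
jsonschema.Draft7Validator.check_schema(schema)
print("definitions are well formed")
```

A real test would read the published files instead of inline strings, but
the checks are the same two: parse, then validate the schema against the
meta schema.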

So, we should consider whether there is any value in going through the
effort of instrumenting the tests so that it is even possible to collect
output from clients and evaluate it.  It *should* be sufficient to
demonstrate that the Data Model works and that all of the types of
client-generated output can be validated against it.  And we can absolutely
do this sort of testing within the context of the Web Platform Tests (WPT).

FWIW this is exactly what we did with XHTML Modularization many years ago.
It was implemented in XML DTD and XML Schema.  We ensured that those
implementations were consumable by popular commercial and free tools that
did validation using DTD and Schema.  We also showed that there were
multiple independent markup languages that were developed by groups within
and outside of the W3C that used the modules.  That was sufficient to
satisfy the Director and exit CR.

-- 
Shane McCarron
Projects Manager, Spec-Ops
Received on Friday, 22 April 2016 17:06:48 UTC
