
RE: Some draft code for mobileOK Basic Tests RI

From: Dominique Hazael-Massieux <dom@w3.org>
Date: Tue, 06 Feb 2007 13:45:13 +0100
To: James G Pearce <jpearce@mtld.mobi>
Cc: Sean Owen <srowen@google.com>, public-mobileok-checker@w3.org
Message-Id: <1170765913.4246.208.camel@cumulustier>

On Monday, 05 February 2007 at 09:57 -0500, James G Pearce wrote:
> One idea we had was to try to aim to describe as many of the tests as
> possible in a language-agnostic way. The test descriptions could be
> picked up at run time. This would have a number of advantages:
>  * Easier porting of the engine (it's just a test interpreter)
>  * Decoupling the implementation from any changes made to the tests
> themselves
>  * Makes it easy for 3rd parties to add additional tests (at least
> those which can be described in that way).

I think that's a good approach, but I think it's fairly clear it won't
be applicable to all tests; for instance, I don't think we can easily
use a declarative approach for describing constraints on CSS rules
(which unfortunately require more work than simply detecting whether a
given rule exists: we also need to know whether and how it applies).
Similarly, I don't think we can declaratively detect whether a GIF or
JPEG file is valid, or whether a page is properly encoded.
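To make that concrete, here is a minimal sketch (purely illustrative, not the RI's actual logic) of the kind of procedural check an image-validity test needs, which a query language over the page's XML can't easily express:

```python
def looks_like_valid_image(data: bytes) -> bool:
    """Rough plausibility check on raw image bytes.

    A real validity test would parse the full file structure; this
    sketch only checks the well-known magic numbers, to show why the
    check is procedural rather than a query over the page markup.
    """
    # GIF files start with the signature "GIF87a" or "GIF89a"
    if data[:6] in (b"GIF87a", b"GIF89a"):
        return True
    # JPEG files start with the SOI marker FF D8 and end with EOI FF D9
    if data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9":
        return True
    return False
```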

> As for the way to describe the tests, there are probably plenty of
> common approaches. But it basically would need to be a query that is
> applied to a possibly-XML document (and its headers, and its
> dependencies) and which returns a tri-state: passed/failed/not-run.

It should also include warnings, given that mobileOK defines a fair set
of them.

I don't know whether it should include any well-known accompanying error
messages or not; error messages probably aren't a good idea, but at
least a set of well-known error codes that can later on be bound to
error messages would be useful.
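A rough sketch of what such a declarative runner could look like (the test IDs, queries, and error codes below are made up for illustration; Python's ElementTree only supports a limited XPath subset):

```python
import xml.etree.ElementTree as ET

# Result states: the tri-state James described, plus warnings
PASS, FAIL, WARN, NOT_RUN = "pass", "fail", "warn", "not-run"

# A hypothetical declarative test description: each entry is a query
# against the retrieved document, the severity if it matches, and a
# well-known error code that a UI could later bind to a message.
TESTS = [
    ("no-frames", ".//frameset", FAIL, "NO_FRAMES-1"),
    ("deprecated-markup", ".//font", WARN, "STYLE_SHEETS-2"),
]

def run_tests(doc: ET.Element) -> dict:
    """Evaluate each declarative test, returning (state, error code)."""
    results = {}
    for test_id, query, severity, code in TESTS:
        try:
            matches = doc.findall(query)
        except SyntaxError:
            # A query the engine cannot evaluate yields "not-run"
            results[test_id] = (NOT_RUN, None)
            continue
        results[test_id] = (severity, code) if matches else (PASS, None)
    return results
```

The point is that the engine stays a small interpreter; adding a test is just adding a (query, severity, code) triple.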

> A nice side-effect of this sort of meta-document is that you could
> start attaching the test results to the top too, and then it goes on
> to double up as the output of the checker as a whole, complete with
> audit trail.
> But it's the approach itself I'm most interested in socialising at the
> moment. Any thoughts?

Hmm... I guess the approach you're suggesting would mean creating two
pieces of code: one to transform the input into an appropriate XML file
that would contain most of the data and some of the pre-analysis made on
it, and a test harness to run the tests on the resulting XML file.

I certainly like the idea, but I think the portability of the first
piece of code will be much lower than that of the second.
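A sketch of the split, to show where the portability boundary falls (element names, the size threshold, and the two checks are all invented for illustration, not taken from the mobileOK tests):

```python
import xml.etree.ElementTree as ET

def preprocess(status: int, headers: dict, body: bytes) -> ET.Element:
    """Stage 1 (the less portable piece): capture the HTTP exchange
    and some pre-analysis into a single XML document."""
    root = ET.Element("retrieval")
    ET.SubElement(root, "status").text = str(status)
    hdrs = ET.SubElement(root, "headers")
    for name, value in headers.items():
        ET.SubElement(hdrs, "header", name=name).text = value
    # Pre-analysis: facts the harness shouldn't have to recompute
    ET.SubElement(root, "bodySize").text = str(len(body))
    return root

def harness(doc: ET.Element, max_size: int = 10240) -> dict:
    """Stage 2 (the portable piece): checks expressed as queries over
    the XML produced by stage 1. The threshold is illustrative."""
    results = {}
    size = int(doc.findtext("bodySize"))
    results["PAGE_SIZE_LIMIT"] = "pass" if size <= max_size else "fail"
    ctype = doc.find(".//header[@name='Content-Type']")
    results["CONTENT_FORMAT_SUPPORT"] = (
        "pass" if ctype is not None and "xhtml" in ctype.text else "fail")
    return results
```

Only stage 1 touches the network stack and parsers, which is why its portability is the harder problem.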

Received on Tuesday, 6 February 2007 12:46:39 UTC
