
Re: Eep RDF API, Inference Engine, and NTriples/N3 Parser

From: Sean B. Palmer <sean@mysterylights.com>
Date: Mon, 4 Mar 2002 20:18:25 -0000
Message-ID: <015b01c1c3b9$bf3d0d60$b9570150@localhost>
To: "Dan Brickley" <danbri@w3.org>
Cc: <www-rdf-interest@w3.org>
> Nice work :)

Thanks.

> Would it make sense to have a common set of tests that could
> be run against Eep, Cwm and similar tools [...]

Absolutely! If these sorts of tools are to be produced more often,
then we certainly need a standard test suite, i.e. one that's been
made and agreed upon by the various people working on these tools
(Tim, DanC, Jos, Bijan, et al.). It would also be very helpful if
someone wrote up a general methodology for the various API layers
involved - from parsing to inference. Bijan and I have discussed some
of these things on publicly archived channels, but it would be
better if there were some central organized collection somewhere.

On the other hand, someone is going to have to put the work in, and
writing about it is much more tedious than actually doing it.

Here's the general overview of what we need tests for, and what needs
to be explained:-

* NTriples parsing. This is actually tremendously easy, so the method
isn't all that important, but difficult test cases would be nice. I
have some local tests that I should upload to the Web.
* Various inter-NTriples-Notation3 test cases. When you're building a
Notation3 parser, sometimes you don't want to have to implement
absolutely everything that N3 has to offer... so we need test cases
that are in various "dumbed down" levels of Notation3 - from NTriples
to NTriples with prefixes, adding more and more bits - new lines,
bNodes, lists, and contexts.
* Full Notation3 descriptions and test cases. There just aren't
enough, and as a result, there's no general agreement on the
specification of N3 between the tools. cf.
http://infomesh.net/2002/n3qname.html Then again, some of the charm of
Notation3 is that it was produced as a hack, and that the tools are
just a rough consensus. Tests couldn't hurt, though.
* Querying. Test cases and approaches would be very helpful in this
area. Little bits of advice like "don't use Cartesian-product-driven
queries, unless you want your CPU to melt". Ahem.
* Inference. Inference is actually very easy - but you have to make
sure that you preserve a list of the bindings from a query, otherwise
it's rather difficult. General API advice, like having to generate
bNodes, how to handle them in queries (tip: query them like univars,
and treat bindings as if they were just URIs), and so forth comes in
handy at this
stage. There are quite a few issues to do with quantification etc.
Also, really solid test cases, and speed tests are useful in this
area.
* Builtins. Builtins are the fun bit - the method outlined in Llyn is
to make a queue of the builtins, and gradually resolve them until
they're either all resolved, or you can't resolve any more (bork!
bork!). Bijan and I chatted about this on #rdfig.
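To make the first point concrete, here's a rough sketch of the usual
line-oriented approach to NTriples parsing. The names and the
(deliberately loose) term regex are made up for illustration - a real
parser needs the full grammar's escapes and datatyped literals:

```python
import re

# One triple per line; "# ..." is a comment. Terms are <URI>,
# _:bNode, or a plain "literal" - a sketch, not the full grammar.
TERM = r'(<[^>]*>|_:\w+|"[^"]*")'
TRIPLE = re.compile(r'^\s*%s\s+%s\s+%s\s*\.\s*$' % (TERM, TERM, TERM))

def parse_ntriples(text):
    triples = []
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith('#'):
            continue  # skip blank lines and comments
        m = TRIPLE.match(line)
        if m is None:
            raise ValueError('bad NTriples line: %r' % line)
        triples.append(m.groups())
    return triples
```

Even a parser this naive is enough to run a store against shared test
cases, which is rather the point.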
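The querying and inference tips above boil down to the same trick:
carry a list of bindings from pattern to pattern instead of crossing
all matches. A minimal sketch (names invented; univars spelled "?x"
for illustration):

```python
def is_univar(term):
    # Universally quantified variable, spelled "?x" here.
    return isinstance(term, str) and term.startswith('?')

def match(pattern, triple, bindings):
    """Unify one triple pattern against one triple, extending a copy
    of bindings; return the extended bindings, or None on mismatch."""
    b = dict(bindings)
    for p, t in zip(pattern, triple):
        if is_univar(p):
            if p in b and b[p] != t:
                return None  # already bound to something else
            b[p] = t
        elif p != t:
            return None
    return b

def query(patterns, store):
    """Conjunctive query: propagate bindings pattern by pattern,
    rather than building the Cartesian-product CPU-melter."""
    results = [{}]
    for pattern in patterns:
        results = [b2 for b in results
                      for t in store
                      for b2 in [match(pattern, t, b)]
                      if b2 is not None]
    return results
```

Inference then falls out almost for free: run a rule's antecedent
through query() and substitute the resulting bindings into the
consequent.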
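And the builtin queue idea, sketched: keep sweeping the queue,
evaluating whichever builtins have all their arguments bound, until
either the queue empties or a full sweep makes no progress. This is
only my reading of the Llyn approach, with toy names throughout:

```python
class Builtin:
    # Toy builtin: out = f(*inputs); ready once all inputs are bound.
    def __init__(self, inputs, out, f):
        self.inputs, self.out, self.f = inputs, out, f
    def ready(self, bindings):
        return all(v in bindings for v in self.inputs)
    def evaluate(self, bindings):
        bindings[self.out] = self.f(*[bindings[v] for v in self.inputs])

def resolve_builtins(builtins, bindings):
    """Sweep the queue until everything resolves or nothing moves."""
    queue = list(builtins)
    progress = True
    while queue and progress:
        progress = False
        pending = []
        for b in queue:
            if b.ready(bindings):
                b.evaluate(bindings)  # may bind further variables
                progress = True
            else:
                pending.append(b)
        queue = pending
    return queue  # whatever is left could not be resolved (bork!)
```

Anything still in the returned queue is unresolvable with the current
bindings, which is the failure case worth a test of its own.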

Cheers,

--
Kindest Regards,
Sean B. Palmer
@prefix : <http://purl.org/net/swn#> .
:Sean :homepage <http://purl.org/net/sbp/> .
Received on Monday, 4 March 2002 15:18:14 GMT
