An n3 test suite

>
>* CWM
>* "dajobe's Turtle stuff"
>* Graham Klyne's Swish:
>   http://www.ninebynine.org/RDFNotes/Swish/Intro.html
>   http://www.ninebynine.org/Software/HaskellRDF/RDF/Graph/N3Parser.hs
>* http://eulersharp.sourceforge.net/ (JosD)
>* http://www.mindswap.org/~katz/pychinko/ (by Yarden Katz, aka jordan,
>   and the mindswap folk, but based on sbp's afon)
>* http://www.wiwiss.fu-berlin.de/suhl/bizer/rdfapi/ PHP, using a port
>   of one of sbp's old Notation3 parsers (not afon)
>* AndyS's grammar for ANTLR:
>   <http://cvs.sourceforge.net/viewcvs.py/jena/jena2/src/com/hp/hpl/jena/n3/n3.g?rev=1.14&view=log>
>
>And, of course, Jos De Roo's amazing Javascript Notation3 work:
>http://cvs.sourceforge.net/viewcvs.py/eulermoz/eulermoz/js/parser/n3/
>http://cvs.sourceforge.net/viewcvs.py/eulermoz/eulermoz/rdfinf/
>
>DanC notes that an IG Note seems worthwhile, but after some discussion
>of whether any consensus has been reached on whether to create a
>specification or a test suite, or even, as TimBL asks, what the
>difference is, sbp notes that there is "no emergent consensus".
>
>DanC, sbp, JosD, and gk all agree that a centralised or distributed
>test suite for Notation3 (presumably independent of the SWAP tools)
>would be a worthy objective.
>
I've been working on a test suite for n3. It is certainly very dependent 
on the SWAP tools. Everything but the lists of test files is located at 
http://www.w3.org/2000/10/swap/test/n3/ . In particular, there is 
http://www.w3.org/2000/10/swap/test/n3/test_results.html , a table of 
every n3 parser I've figured out how to run on some test files. It needs 
more test files, and better characterizations of the ones it has. I 
haven't figured out how to run Swish or Eulermoz yet.
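
To give a concrete idea of what each cell of that table involves, here is 
a rough Python sketch of the kind of check a harness has to make for each 
parser/test-file pair. The parser command and file names are just 
examples, and comparing sorted n-triples lines is a simplification (it 
breaks down when parsers number blank nodes differently):

    import subprocess

    def run_test(parser_cmd, n3_file, reference_nt):
        # run the parser under test, asking for n-triples output
        result = subprocess.run(parser_cmd + [n3_file],
                                capture_output=True, text=True)
        if result.returncode != 0:
            return "parse error"
        # naive comparison: sorted non-blank lines; a real harness needs
        # graph comparison to cope with blank node renaming
        got = sorted(l for l in result.stdout.splitlines() if l.strip())
        want = sorted(l.rstrip("\n") for l in open(reference_nt)
                      if l.strip())
        return "pass" if got == want else "output differs"

    # e.g. cwm as the parser under test (file names are hypothetical)
    print(run_test(["cwm", "--ntriples"], "syntax-01.n3", "syntax-01.nt"))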

The files in that directory are:
commandList         a file containing the commands to run (on my machine)
                    to parse an n3 file with each of the parsers
tester.py           a program that, given commandList and some n3 files
                    listing the tests to run, makes an n3 file describing
                    the results
makeReport.n3       a file that, when run with cwm --think --strings over
                    the output of tester.py, makes an html file
test_results.n3     the output of me running tester.py
test_results.html   the html file made from test_results.n3
Makefile            a way to automate the above steps a little (sketched
                    below)
a bunch of .nt files    reference outputs of the cwm parser tests. These
                    were made by cwm, so don't trust them.
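
The Makefile boils down to the two steps below. This is only a sketch: 
whether tester.py takes commandList as an argument, and the exact cwm 
argument order, are my guesses rather than a copy of the actual Makefile:

    import subprocess

    # 1. run every parser listed in commandList over the test files,
    #    writing the results out as an n3 file
    with open("test_results.n3", "w") as out:
        subprocess.run(["python", "tester.py", "commandList"],
                       stdout=out, check=True)

    # 2. feed the results plus the report rules to cwm: --think applies
    #    the rules in makeReport.n3, and --strings prints the strings
    #    they generate, which here add up to the html report
    with open("test_results.html", "w") as out:
        subprocess.run(["cwm", "test_results.n3", "makeReport.n3",
                        "--think", "--strings"], stdout=out, check=True)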


Yosi Scharf

Received on Friday, 19 November 2004 16:25:02 UTC