- From: Jeremy Carroll <jjc@hplb.hpl.hp.com>
- Date: Mon, 17 Sep 2001 17:33:49 +0100
- To: w3c-rdfcore-wg@w3.org
Brian:
> Your noting is noted. Did you have a specific suggestion in mind?

Thank-you, ...

The current test cases have a regular format that is easy to automate tests against: look for an RDF file and an N-Triples file and match them up, or look for an error file and check that you fail it.

If we were to extend the test cases with different paradigms, I would suggest:

+ the paradigm should be clear;
+ there should be no need to read a README to understand a particular test;
+ the different paradigms should be in different top-level directories.

E.g. we copy all the current tests into a directory "syntax", leaving the internal structure unchanged. The "syntax" paradigm supports error*.rdf and test*.{rdf,nt} tests.

We could then have a paradigm "entailment", where each test consists of a directory with two sub-directories, "premises" and "conclusions". The "premises" sub-directory would then include the axioms file as well as others.

Personally, I think I would prefer that each test in the "entailment" paradigm used either RDF/XML or N-Triples for its facts (not both). There is no point in confusing a test for one thing with a test for another.

We can still have a README explaining each paradigm; the point is that we expect to have more than one test in the same paradigm, so a developer who has chosen to run tests of that paradigm only needs to write one lot of code.

Another paradigm I proposed earlier was "equality", where each test consists of two or more RDF/XML files that contain the same model. I saw this as useful for testing xml:lang, which does not occur in N-Triples.

Jeremy
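[A sketch of what "easy to automate tests against" means for the "syntax" paradigm. This is only an illustration of the file-matching convention described above; the function name, and the choice of Python, are mine, not part of any proposal.]

```python
import os
import re

def collect_syntax_tests(root):
    """Walk a "syntax" test directory and pair up test cases.

    Returns (positive, negative):
    - positive: list of (rdf_path, nt_path) pairs, where the parser's
      output for the RDF/XML file must match the N-Triples file;
    - negative: list of error*.rdf files the parser must reject.
    """
    positive, negative = [], []
    for dirpath, _dirs, files in os.walk(root):
        names = set(files)
        for name in sorted(names):
            path = os.path.join(dirpath, name)
            if re.fullmatch(r"error.*\.rdf", name):
                negative.append(path)
            elif re.fullmatch(r"test.*\.rdf", name):
                # Match test*.rdf with its test*.nt counterpart.
                nt = name[:-len(".rdf")] + ".nt"
                if nt in names:
                    positive.append((path, os.path.join(dirpath, nt)))
    return positive, negative
```

Because the paradigm is carried entirely by file naming, no README needs to be parsed: a developer writes this one lot of code once and it covers every test in the directory.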
Received on Monday, 17 September 2001 12:29:16 UTC