Re: RDF 1.1 Semantics Implementation Report on the Swertia RDF-Based Reasoner

Dear Working Group!

Today, I wanted to do all the testing for my reasoner
and was ready to spend half the day on this activity.
Unfortunately, I found that this is not as easy as I
thought.

The procedure would be to first parse the "manifest.ttl" file
into an RDF graph, extract all the metadata and links to the
test files from it, determine the kind of test case
(positive/negative entailment/satisfiability test),
call the reasoner with pointers to the files from the
manifest, passing along the file type (yes, the test files
really come in different serialization formats!), collect the
result value, check for processing errors or timeouts, and
finally build up the individual EARL RDF graphs and the
complete result graph with the project-specific metadata.
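
Just to illustrate the manifest-reading step, a rough sketch in
Python with rdflib might look like the following (rdflib, the mf:
vocabulary URIs and the exact manifest layout are my assumptions
here and would have to be checked against the actual test suite):

   # Rough sketch only; vocabulary and manifest layout are assumptions.
   from rdflib import Graph, Namespace, RDF
   from rdflib.collection import Collection

   MF = Namespace("http://www.w3.org/2001/sw/DataAccess/tests/test-manifest#")

   g = Graph()
   g.parse("manifest.ttl", format="turtle")

   # the manifest node points to an RDF list of test entries
   manifest = next(g.subjects(RDF.type, MF.Manifest))
   entries = Collection(g, next(g.objects(manifest, MF.entries)))

   for entry in entries:
       test_type = next(g.objects(entry, RDF.type))       # test kind
       action = next(g.objects(entry, MF.action))         # input document(s)
       result = next(g.objects(entry, MF.result), None)   # may be absent
       print(entry, test_type, action, result)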

Now, that's a lot of stuff to implement (and to understand
in the first place). Most of it is completely generic for
any reasoning tool used with the RDF 1.1 test suite,
and much of it, such as the particular manifest or EARL
format, should not even need to concern the tool provider.
What I would like to have is a tool that does all the testing
and just calls the reasoner with a defined input/output
behaviour, so that all I have to implement myself is a thin
wrapper around my reasoner that provides this input/output
behaviour.

Does such a tool already exist and I have simply missed it?

The proposed I/O protocol for the reasoner
(wrapped into a command-line tool) would be:

   Input parameters:
   * reasoning mode: SATISFIABILITYCHECK | ENTAILMENTCHECK
   * input file name 1
   * input file serialization 1
   * input file name 2 (only for entailment checks)
   * input file serialization 2 (only for entailment checks)

   Output (on stdout): TRUE | FALSE | ERROR
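
A thin wrapper implementing this protocol could then be as small
as the following sketch (Python; the check_* functions are
placeholders for whatever API the actual reasoner exposes):

   #!/usr/bin/env python3
   # Sketch of the proposed wrapper protocol; the check_* functions
   # are placeholders for calls into the actual reasoner.
   import sys

   def check_satisfiable(path, syntax):
       raise NotImplementedError   # call the reasoner here

   def check_entailment(path1, syntax1, path2, syntax2):
       raise NotImplementedError   # call the reasoner here

   def main(argv):
       try:
           mode, path1, syntax1 = argv[1], argv[2], argv[3]
           if mode == "SATISFIABILITYCHECK":
               answer = check_satisfiable(path1, syntax1)
           elif mode == "ENTAILMENTCHECK":
               answer = check_entailment(path1, syntax1, argv[4], argv[5])
           else:
               raise ValueError("unknown reasoning mode")
           print("TRUE" if answer else "FALSE")
           return 0
       except Exception:
           print("ERROR")
           return 1

   if __name__ == "__main__":
       sys.exit(main(sys.argv))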

The testing tool itself would be responsible for checking for
timeouts and unhandled errors (other than those the reasoner
itself reports as "ERROR"), and would be called with the
following input parameters:

   * project metadata file name (see below)
   * testdata folder name (which includes the "manifest.ttl" file)
   * output file name
   * timeout (in seconds)
   * reasoner file name (the executable)
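
For the timeout handling, the testing tool could simply run the
reasoner executable as a subprocess, roughly like this (Python
sketch, with the argument order as proposed above):

   import subprocess

   def run_reasoner(reasoner, mode, path1, syntax1,
                    path2=None, syntax2=None, timeout=60):
       args = [reasoner, mode, path1, syntax1]
       if path2 is not None:
           args += [path2, syntax2]
       try:
           proc = subprocess.run(args, capture_output=True,
                                 text=True, timeout=timeout)
           answer = proc.stdout.strip()
           # anything other than TRUE/FALSE (crash, garbage output)
           # is treated as an error
           return answer if answer in ("TRUE", "FALSE") else "ERROR"
       except subprocess.TimeoutExpired:
           return "ERROR"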

The project metadata file would have to be created by the
reasoner provider and would contain the fixed metadata from
which the result EARL file is created: both the metadata for
the whole test run and the fixed metadata for each individual
test result. This metadata includes the project URI, the
developer, etc. The appropriate format may be RDF, a key=value
format, or something similar.
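
For example, a simple key=value variant might look like this
(the keys and URIs are of course only a suggestion and would
have to be agreed upon):

   # hypothetical example of the fixed project metadata
   project.uri      = http://example.org/swertia
   project.name     = Swertia RDF-Based Reasoner
   developer.name   = Michael Schneider
   developer.uri    = http://example.org/people/michael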

If no such tool exists and none is being created by the WG,
I will have to do this myself, which I'm afraid will delay
things further.

Regards,
Michael

Am 04.12.2013 18:25, schrieb Peter F. Patel-Schneider:
> Hi Michael
>
> The RDF 1.1 entailment tests have been changed a bit, to conform better
> to what RDF systems do.
>
> When you run the tests, could you make sure that you have an up to date
> set?
>
> Thanks,
>
> peter
>
> On 12/02/2013 03:20 PM, Michael Schneider wrote:
>> Dear Working Group,
>>
>> please find below my implementation report for my experimental Swertia
>> RDF-Based Reasoner, a system that tries to be a close implementation
>> of the model-theoretic semantics of RDF (unlike the many existing
>> systems that are more based on the RDF entailment rules). I still
>> wasn't able to run the official RDF 1.1 tests, due to lack of time. I
>> also believe that the result for the test suite will not become very
>> good, as many of the tests are about datatype reasoning, which is not
>> supported by my system. Anyway, I still plan to run the tests, as soon
>> as I find the time, and also plan to provide the results and the
>> prototypical system, but for now I provide you with my implementation
>> experiences only. I hope this will already be useful for the Working
>> Group.
>>
>> Best regards,
>> Michael
>
