
Re: DAWG test suite & process overview

From: Axel Polleres <axel.polleres@deri.org>
Date: Thu, 25 Mar 2010 09:25:04 +0000
Cc: "SPARQL Working Group" <public-rdf-dawg@w3.org>
Message-Id: <603F5938-B2EC-42E6-8B31-8A6499E0F42C@deri.org>
To: Lee Feigenbaum <lee@thefigtrees.net>
Comments inline:

> ==> This was not supported in the test case vocabulary as is. The only
> SPARQL 1.0 feature that would need this would be REDUCED, I suppose. We
> handled this via (from the README):


Isn't the same true for LIMIT/OFFSET without a fully deterministic ORDER BY clause?
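To illustrate the point: with LIMIT but no deterministic ORDER BY, any size-k sub-multiset of the full solution multiset is a correct answer, so a fixed expected result file cannot work. A hypothetical checker (the function name and tuple encoding of solutions are my own illustration, not part of the DAWG test vocabulary) might look like this:

```python
from collections import Counter

def passes_limit_test(expected_solutions, actual_solutions, limit):
    """Check a LIMIT-without-ORDER-BY result: any sub-multiset of the
    expected solutions of size min(limit, total) is acceptable."""
    expected = Counter(expected_solutions)
    actual = Counter(actual_solutions)
    total = sum(expected.values())
    if sum(actual.values()) != min(limit, total):
        return False
    # every returned solution may occur no more often than in the full set
    return all(actual[s] <= expected[s] for s in actual)

full = [("alice",), ("bob",), ("carol",)]
print(passes_limit_test(full, [("bob",), ("alice",)], 2))   # True
print(passes_limit_test(full, [("dave",), ("alice",)], 2))  # False
```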

> * Organisation of test cases: Shall we do separate manifests per
> feature? etc.
> 
> ==> Yes, I think this makes sense.


I think we could simply start by having the people assigned actions to collect
initial test cases produce a first version of a manifest for their "category"
and, ideally, maintain those test cases. Or should we rather have one or two
designated test-case editors/maintainers?

As for the EARL tools and reports, we should check (with some deadline) whether 
anyone has alternatives to offer, and decide soon which framework to use.

> ...hope this is helpful...

very much so!

Axel

On 25 Mar 2010, at 04:33, Lee Feigenbaum wrote:

> I've seen a few requests for an overview of the test setup from the
> first SPARQL working group (DAWG). I guess I know enough about it to
> volunteer.
> 
> SPARQL 1.0 tested 2 things: query & protocol.
> 
> == SPARQL 1.0 Query Testing ==
> 
> http://www.w3.org/2001/sw/DataAccess/tests/README.html gives the overall
> structure of the tests. To answer Axel's questions:
> 
> * How were the test cases collected in DAWG? Any naming conventions you
> followed?
> 
> ==> The tests were grouped into directories within
> http://www.w3.org/2001/sw/DataAccess/tests/data-r2/ . There was no
> particular naming convention followed, other than the manifest file
> within each directory being named manifest.ttl.
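Given that convention (one manifest.ttl per test directory), discovering all manifests under a suite root is a one-liner. A sketch, assuming only the layout described above; the function name and demo directory names are my own:

```python
import tempfile
from pathlib import Path

def find_manifests(root):
    """Collect every per-directory manifest.ttl under a test-suite root,
    mirroring the data-r2/ layout described above."""
    return sorted(Path(root).glob("*/manifest.ttl"))

# tiny demonstration with a throwaway directory tree
with tempfile.TemporaryDirectory() as root:
    for d in ("basic", "optional"):
        (Path(root) / d).mkdir()
        (Path(root) / d / "manifest.ttl").write_text("# manifest\n")
    print([p.parent.name for p in find_manifests(root)])  # ['basic', 'optional']
```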
> 
> * Did anyone do some scripts to provide the manifests for testcases in
> DAWG or were these all assembled manually?
> 
> ==> Manifests were assembled and maintained manually.
> 
> * Also, is it possible to have alternative results? (for
> non-deterministic queries, e.g. sample) in the test format of DAWG?
> 
> ==> This was not supported in the test case vocabulary as is. The only
> SPARQL 1.0 feature that would need this would be REDUCED, I suppose. We
> handled this via (from the README):
> 
> """
> Query evaluation tests that involve the REDUCED keyword have slightly
> different passing criteria. These tests are indicated in the manifest
> files with the mf:resultCardinality predicate with an object of
> mf:LaxCardinality. To pass such a test, the result set produced by a
> SPARQL implementation must contain each solution in the expected result
> set at least once and no more than the number of times that the solution
> occurs in the expected result set. (That is, the expected result set
> contains the solutions with cardinalities as they would be if the query
> did not contain REDUCED; to pass the test, an implementation must
> produce the correct results with cardinalities between one and the
> cardinality in the expected result set.)
> """

> * Organisation of test cases: Shall we do separate manifests per
> feature? etc.
> 
> ==> Yes, I think this makes sense.
> 
> I have a perl script lying around that generates this sort of overview
> document (http://www.w3.org/2001/sw/DataAccess/tests/r2) from the super
> manifest (manifest of manifests) and the individual manifests by doing a
> few SPARQL queries.
> 
> === Results ===
> 
> We collected results via EARL as documented here:
> 
> http://www.w3.org/2001/sw/DataAccess/tests/earl
> 
> We did _not_ provide a harness to run the tests or generate results.
> Implementers provided their own results.
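Since implementers submitted their own EARL, each submission boils down to a set of per-test assertions. A sketch of rendering one such assertion in Turtle, using the standard EARL vocabulary (earl:Assertion, earl:TestResult, earl:passed/earl:failed); the exact report shape the WG expected is documented at the URL above, and the URIs below are placeholders, not real test identifiers:

```python
EARL_TEMPLATE = """\
@prefix earl: <http://www.w3.org/ns/earl#> .

[] a earl:Assertion ;
   earl:subject <{subject}> ;
   earl:test <{test}> ;
   earl:result [ a earl:TestResult ; earl:outcome earl:{outcome} ] .
"""

def earl_assertion(subject, test, passed):
    """Render one EARL assertion in Turtle for a single test outcome."""
    return EARL_TEMPLATE.format(subject=subject, test=test,
                                outcome="passed" if passed else "failed")

print(earl_assertion("http://example.org/my-engine",
                     "http://example.org/manifest#test-1",
                     True))
```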
> 
> We had a tool chain courtesy of EricP that parsed the EARL and populated
> -- I think -- a mysql DB with the results, which was then used to
> generate the output reports at
> 
>    http://www.w3.org/2001/sw/DataAccess/impl-report-ql
>    http://www.w3.org/2001/sw/DataAccess/tests/implementations
> 
> At one point in my life I think I knew how to use this tool chain, but
> I'll have to work to resuscitate that knowledge if we choose to use it.
> It relies on tests being associated with 'facets' being tested, in order
> to assign a score for how complete each implementation is for each facet.
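The facet roll-up Lee describes amounts to: for each facet, the fraction of its associated tests an implementation passes. A hypothetical sketch of that scoring (data shapes and names are my own, not EricP's tool chain):

```python
from collections import defaultdict

def facet_scores(results, test_facets):
    """Score completeness per facet as the fraction of that facet's
    tests passed: results maps test id -> bool, test_facets maps
    test id -> list of facet names."""
    totals, passed = defaultdict(int), defaultdict(int)
    for test, ok in results.items():
        for facet in test_facets.get(test, ()):
            totals[facet] += 1
            if ok:
                passed[facet] += 1
    return {f: passed[f] / totals[f] for f in totals}

facets = {"t1": ["basic"], "t2": ["basic"], "t3": ["optional"]}
results = {"t1": True, "t2": False, "t3": True}
print(facet_scores(results, facets))  # {'basic': 0.5, 'optional': 1.0}
```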
> 
> == SPARQL 1.0 Protocol Testing ==
> 
> Elias Torres created a harness for performing the protocol tests. If I
> recall correctly, the tests were mainly based on the examples in the
> protocol specification. The tools are here:
> 
> http://www.w3.org/2001/sw/DataAccess/proto-tests/tools/
> 
> ...but I don't remember much about how to use them. Basically, we
> pointed the tools at a SPARQL endpoint and they spat out result files
> which -- again, I think -- we manually compiled into the implementation
> report here:
> 
> http://www.w3.org/2001/sw/DataAccess/impl-report-protocol
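For context on what "pointing the tools at an endpoint" involves: the SPARQL Protocol's HTTP GET binding carries the query (and optional dataset URIs) in the query string. A minimal sketch of building such a request URL (function name and example endpoint are my own; this is not Elias's harness):

```python
from urllib.parse import urlencode

def query_request_url(endpoint, query, default_graph=None):
    """Build the GET form of a SPARQL Protocol query request: the query
    string carries 'query' and, optionally, 'default-graph-uri'."""
    params = [("query", query)]
    if default_graph:
        params.append(("default-graph-uri", default_graph))
    return endpoint + "?" + urlencode(params)

url = query_request_url("http://example.org/sparql",
                        "SELECT * WHERE { ?s ?p ?o }")
print(url)
```

A harness would then issue this request, compare the response body against the expected results, and record the outcome.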
> 
> 
> 
> ...hope this is helpful...
> 
> Lee
> 
> 
> 
Received on Thursday, 25 March 2010 09:25:45 GMT
