Re: rdb2rdf TC

Richard
On 04/10/2011 15:24, Richard Cyganiak wrote:
> Boris,
>
> On 4 Oct 2011, at 01:27, Boris Villazón Terrazas wrote:
>> 1. how do I tell if the test cases cover all the functionality? (Souri refers to this as coverage completeness).
> …
>> - R2RML features are described in the R2RML specification
>> · logical tables
>> · R2RML views
>> · typed literals
>> · inverse expressions
>> · ....
>> These features are reflected in the "Specification Reference" property of the TCs.
>> So, the R2RML test cases should include at least one TC for each R2RML feature.
> Ideally, we'd go through the spec and write a test case for every testable assertion. To pick a random paragraph from the R2RML spec:
>
> [[
> A term map with a term type of rr:Literal may have a specified language tag. It is represented by the rr:language property on a term map. If present, its value must be a valid language tag.
> ]]
>
> That's three test cases:
>
> 1. A term map without rr:language property is allowed.
> 2. A term map with rr:language where the value isn't a valid language tag is an error.
> 3. A term map with rr:language where the value *is* a language tag is allowed.
>
> In practice, the first case may be unnecessary because it's already covered by other test cases elsewhere. The second case is perhaps “time permitting”. The third is the important one.
>
> Doing it this way would result in a lot of small test cases, which is a Good Thing IMO.
>
> I'd suggest systematically going through the normative parts of the spec and identifying the required test cases.
Thanks!

I was already preparing a kind of coverage matrix for this [1].
I'll follow your suggestion and go through the normative parts of the
document more systematically.

[1] http://www.w3.org/2001/sw/rdb2rdf/wiki/R2RML_TC
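
For example, the third case you describe could be exercised with a mapping
along these lines (just a sketch; the table, column and predicate are
invented for illustration):

@prefix rr: <http://www.w3.org/ns/r2rml#> .

<#TriplesMap1>
    rr:logicalTable [ rr:tableName "BOOKS" ] ;
    rr:subjectMap [ rr:template "http://example.com/book/{ID}" ] ;
    rr:predicateObjectMap [
        rr:predicate <http://purl.org/dc/terms/title> ;
        # case 3: rr:language with a valid language tag on a literal term map
        rr:objectMap [ rr:column "TITLE" ; rr:language "en" ]
    ] .

The expected output of the TC would then simply have to contain the
corresponding @en-tagged literals.
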
>> 2. if you have an implementation that passes all the test cases, how do we keep a record of that?
>> I think a first, basic approach for this purpose could be that implementors should:
>> 1. download the test suite
>> 2. run the test cases locally
>> 3a. upload the results to the TC server
>> 3b. download a script from the TC server
>> 4a. the server analyzes the results
>> 4b. execute the script to analyze the results
>> 5a. the server generates a report of the implementation
>> 5b. the script generates a report of the implementation
>> 6. the report generated is stored in the server for further analysis/comparison
> I missed a lot of the telecon discussion during the last few weeks, so I'm probably not up to speed here. What's the TC server?
Sorry, I didn't explain myself well.
I was referring to the W3C server where we store the TCs. That is where
we have the options we were discussing a while ago [2].

[2] http://www.w3.org/2001/sw/rdb2rdf/wiki/TestHarness
> Are you planning to provide a test driver that executes the test cases?
I'm not sure I'll have time to provide a test driver, although ideally
that would be the best option.
At this stage, each implementation will probably have to execute the
test cases itself, and then our script would analyze the results.
> How many reports do you expect to receive?
I expect only one report per implementation.
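
As for the shape of that report, one possibility (just an idea at this
point, not something we have agreed on) would be one EARL-style assertion
per test case, e.g.:

@prefix earl: <http://www.w3.org/ns/earl#> .

<#assertion1> a earl:Assertion ;
    earl:subject <http://example.org/my-r2rml-processor> ;  # implementation under test (made-up URI)
    earl:test <http://example.org/r2rml-testcases#tc0001> ; # made-up TC identifier
    earl:mode earl:automatic ;
    earl:result [ a earl:TestResult ; earl:outcome earl:passed ] .

That would also make it easy to compare implementations later on, since
the reports themselves would just be RDF.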


Boris
>
> Best,
> Richard
>
