- From: Richard Cyganiak <richard@cyganiak.de>
- Date: Tue, 4 Oct 2011 14:24:32 +0100
- To: bvillazon@fi.upm.es
- Cc: RDB2RDF WG <public-rdb2rdf-wg@w3.org>
Boris,

On 4 Oct 2011, at 01:27, Boris Villazón Terrazas wrote:

> 1. how do I tell if the test cases cover all the functionalities? (Souri refers to this as coverage completeness).
> …
> - R2RML features are described in the R2RML specification
>   · logical tables
>   · R2RML views
>   · typed literals
>   · inverse expressions
>   · ....
> These features are reflected in the "Specification Reference" property of the TCs.
> So, the R2RML test cases should include at least one TC for a particular R2RML feature.

Ideally, we'd go through the spec and write a test case for every testable assertion. To pick a random paragraph from the R2RML spec:

[[
A term map with a term type of rr:Literal may have a specified language tag. It is represented by the rr:language property on a term map. If present, its value must be a valid language tag.
]]

That's three test cases:

1. A term map without an rr:language property is allowed.
2. A term map with rr:language where the value isn't a valid language tag is an error.
3. A term map with rr:language where the value *is* a valid language tag is allowed.

In practice, the first case may be unnecessary because it's already covered by other test cases elsewhere. The second case is perhaps “time permitting”. The third is the important one.

Doing it this way would result in a lot of small test cases, which is a Good Thing IMO. I'd suggest systematically going through the normative parts of the spec and identifying the required test cases.

> 2. if you have an implementation that passes all the test cases, how do we keep a record of that?
> I think the first basic approach for this purpose can be that implementors should
> 1. download the test suite
> 2. run the test cases locally
> 3a. upload the results to the TC server
> 3b. download a script from the TC server
> 4a. the server analyzes the results
> 4b. execute the script for analyzing the results
> 5a. the server generates a report of the implementation
> 5b. the script generates a report of the implementation
> 6. the report generated is stored in the server for further analysis/comparison

I missed a lot of the telecon discussion during the last few weeks, so I'm probably not up to speed here. What's the TC server? Are you planning to provide a test driver that executes the test cases? How many reports do you expect to receive?

Best,
Richard
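PS: For concreteness, the rr:language cases above might look like the following in Turtle. This is only a sketch: the triples map name and the EMP/ENAME table and column names are hypothetical (loosely echoing the spec's examples), not part of any agreed test case.

```turtle
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix ex: <http://example.com/ns#> .

# Positive test case: rr:language with a valid language tag is allowed.
<#TriplesMap1>
    rr:logicalTable [ rr:tableName "EMP" ] ;
    rr:subjectMap [ rr:template "http://example.com/emp/{EMPNO}" ] ;
    rr:predicateObjectMap [
        rr:predicate ex:name ;
        rr:objectMap [ rr:column "ENAME" ; rr:language "en-US" ]
    ] .

# Negative test case (shown commented out): "english" is not a valid
# language tag, so a conforming processor must reject this object map.
# rr:objectMap [ rr:column "ENAME" ; rr:language "english" ]
```

The case without rr:language is just the same mapping with the rr:language triple removed, which is presumably already exercised by other test cases.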
Received on Tuesday, 4 October 2011 13:25:09 UTC