Re: How to obtain the latest version of SPARQL 1.1 test cases?

Sandro, that would be great. Phil had assigned that to Eric some time ago, but it never happened :(.

Basically, the SPARQL test suite and the RDF test suites, along with their implementation reports, could all be replaced by what’s in github.com/w3c/rdf-tests, although they could probably simply be redirected to the new locations. I’ll forward you some email chains from 2015 directly.
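
In the meantime, probably the most reliable way to get the current tests is to pull that repository directly. A minimal sketch (the exact directory layout inside the repo is from memory and may shift as the CG reorganizes things, so check its README):

```
# Clone the curated test repository maintained by the Community Group
git clone https://github.com/w3c/rdf-tests.git
ls rdf-tests/    # the SPARQL 1.1 suite sits in its own subdirectory, as I recall
```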

Gregg Kellogg
gregg@greggkellogg.net

> On Nov 13, 2017, at 2:45 PM, Sandro Hawke <sandro@w3.org> wrote:
> 
> Sounds like someone (me?) should put a note to that effect in a bunch of other places. Anyone care to enumerate all the wrong places to look?
> 
> - Sandro
> 
> On November 13, 2017 5:32:14 PM EST, Gregg Kellogg <gregg@greggkellogg.net> wrote:
> We created a Community Group to curate the RDF and SPARQL tests. The latest tests can be found at http://w3c.github.io/rdf-tests/.
> 
> Gregg Kellogg
> gregg@greggkellogg.net
> 
>> On Nov 12, 2017, at 3:25 AM, Wouter Beek <wouter@triply.cc> wrote:
>> 
>> Hi,
>> 
>> What is the best way to obtain the latest version of the SPARQL 1.1 test cases?  The main website [1] links to an archive file [2] that at first glance seems to contain everything.  Looking a little deeper, it turns out that the content of the website [1] and the content of the archive file [2] are not exactly the same.  For example, [3] includes RIF test cases that are not in the archive.
>> 
>> [1] https://www.w3.org/2009/sparql/docs/tests/
>> [2] https://www.w3.org/2009/sparql/docs/tests/sparql11-test-suite-20121023.tar.gz
>> [3] https://www.w3.org/2009/sparql/docs/tests/data-sparql11/entailment/
>> 
>> In case the site [1] contains the newest version and the archive [2] is outdated, what would be the best way to scrape the newest content from the web site?  I have tried the following command, but standard download tools are unable to follow links inside the Turtle-formatted manifest files, so I miss the content that is on the site but only reachable through a manifest file (a rough sketch of the manifest-following approach I have in mind follows the command):
>> 
>> ```
>> $ wget --recursive --page-requisites --convert-links --no-parent https://www.w3.org/2009/sparql/docs/tests/
>> ```
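>> 
>> What I would like is something that reads the top-level manifest and then fetches everything the per-directory manifests point at.  The sketch below is only meant to illustrate the idea; it assumes the top-level manifest is manifest-all.ttl and simply regex-matches relative IRIs in angle brackets, so it is a heuristic rather than a real Turtle parser:
>> 
>> ```
>> # Heuristic sketch: fetch the root manifest, then every relative IRI that
>> # the per-directory manifests mention.  Not a real Turtle parser.
>> BASE=https://www.w3.org/2009/sparql/docs/tests/data-sparql11
>> curl -s "$BASE/manifest-all.ttl" \
>>   | grep -oE '<[^:>]+\.ttl>' | tr -d '<>' \
>>   | while read m; do
>>       dir=$(dirname "$m")
>>       mkdir -p "$dir" && curl -s -o "$m" "$BASE/$m"
>>       grep -oE '<[^:>]+>' "$m" | tr -d '<>' \
>>         | while read f; do curl -s -o "$dir/$f" "$BASE/$dir/$f"; done
>>     done
>> ```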
>> 
>> Thanks for making this collection of test cases available.  This helps developers a lot.
>> 
>> ---
>> Cheers,
>> Wouter Beek.
>> 
>> Email: wouter@triply.cc
>> WWW: http://triply.cc
>> Tel: +31647674624
>> 
> 

Received on Wednesday, 15 November 2017 01:23:11 UTC