
Re: Billion Triples Challenge Crawl 2014

From: Tim Berners-Lee <timbl@w3.org>
Date: Sun, 16 Feb 2014 08:31:45 +0200
Cc: Andreas Harth <andreas@harth.org>, SWIG Web <semantic-web@w3.org>
Message-Id: <B05AFB67-3AA2-46E6-AA47-34A0E30B59E7@w3.org>
To: Michel Dumontier <michel.dumontier@gmail.com>
On 2014-02-14, at 09:46, Michel Dumontier wrote:

> Andreas,
>  I'd like to help by getting bio2rdf data into the crawl, really. but we gzip all of our files, and they are in n-quads format.
> http://download.bio2rdf.org/release/3/
> think you can add gzip/bzip2 support ?
> m.
> Michel Dumontier
> Associate Professor of Medicine (Biomedical Informatics), Stanford University
> Chair, W3C Semantic Web for Health Care and the Life Sciences Interest Group
> http://dumontierlab.com

And on 2014-02-15, at 18:00, Hugh Glaser wrote:

> Hi Andreas and Tobias.
> Good luck!
> Actually, I think essentially ignoring dumps and doing a “real” crawl, is a feature, rather than a bug.


Agree with Hugh. I would encourage you to unzip the data files on your own servers
so that the URIs resolve and your data is really Linked Data.
Being compatible in this way has lots of advantages for the community.
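[A minimal sketch of what "unzip the data files on your own servers" could look like in practice. The directory path and filenames are illustrative, not Bio2RDF's actual layout:]

```shell
# Hypothetical sketch: decompress each gzipped N-Quads dump in place,
# so the plain *.nq file is served at its published URI.
# DUMP_DIR is an assumed location, overridable via the environment.
DUMP_DIR="${DUMP_DIR:-/var/www/download/release/3}"

for f in "$DUMP_DIR"/*.nq.gz; do
    [ -e "$f" ] || continue      # no-op if there are no .gz files
    gunzip --keep "$f"           # keep the .gz, write the plain .nq next to it
done
```

[The `--keep` flag (GNU gzip) leaves the compressed copy in place, so both the .nq.gz and the dereferenceable .nq remain available.]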

Received on Sunday, 16 February 2014 09:05:45 UTC

This archive was generated by hypermail 2.3.1 : Tuesday, 1 March 2016 07:42:48 UTC