Re: Think before you write Semantic Web crawlers

From: Andreas Harth <andreas@harth.org>
Date: Wed, 22 Jun 2011 21:29:27 +0200
Message-ID: <4E024297.7070908@harth.org>
To: Martin Hepp <martin.hepp@ebusiness-unibw.org>
CC: Christopher Gutteridge <cjg@ecs.soton.ac.uk>, Daniel Herzig <herzig@kit.edu>, semantic-web@w3.org, public-lod@w3.org
Hi Martin,

On 06/22/2011 09:08 PM, Martin Hepp wrote:
> Please make a survey among typical Web site owners on how many of them have
>
> 1. access to this level of server configuration and
> 2. the skills necessary to implement these recommendations.

Agreed.

But in the case we're discussing, the site owner also:

3. publishes millions of pages

I am glad you brought up the issue, as there are several data providers
out there (some with quite prominent names) with hundreds of millions of
triples who cannot sustain even one lookup every couple of seconds.
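For crawler authors on this list, a minimal sketch of the polite behaviour being asked for might look like the following (hypothetical code, not from any particular crawler: the agent name, default delay, and helper names are my own assumptions). It reads a site's Crawl-delay from robots.txt and falls back to a conservative default, so a publisher never sees more than one request every few seconds:

```python
# Hypothetical sketch of a rate-limited crawler loop.
# Assumptions: agent name "ExampleCrawler" and a 2-second default delay.
import time
import urllib.robotparser

DEFAULT_DELAY = 2.0  # seconds between requests when robots.txt is silent


def crawl_delay(robots_txt: str, agent: str = "ExampleCrawler") -> float:
    """Parse a robots.txt body and return the delay to use for `agent`."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    delay = rp.crawl_delay(agent)
    return float(delay) if delay is not None else DEFAULT_DELAY


def polite_fetch(urls, fetch, delay):
    """Fetch URLs sequentially, sleeping `delay` seconds between requests."""
    results = []
    for url in urls:
        results.append(fetch(url))
        time.sleep(delay)
    return results


# A site that asks for a 5-second gap between requests:
print(crawl_delay("User-agent: *\nCrawl-delay: 5\n"))      # -> 5.0
# A site with no Crawl-delay directive falls back to the default:
print(crawl_delay("User-agent: *\nDisallow: /private/\n"))  # -> 2.0
```

The point is simply that the burden of throttling sits with the consumer; a couple of lines in the crawl loop spare the publisher from having to configure server-side rate limiting at all.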

I am very much in favour of amateur web enthusiasts (I would like to claim
I've started as one).  Unfortunately, you get them on both ends, publishers
and consumers.  Postel's law applies to both, I guess.

Best regards,
Andreas.
Received on Wednesday, 22 June 2011 19:29:51 UTC
