
Re: Size matters -- How big is the danged thing

From: Yves Raimond <yves.raimond@gmail.com>
Date: Sun, 23 Nov 2008 00:59:04 +0000
Message-ID: <82593ac00811221659u4d6ebea1u398cbac351a46c10@mail.gmail.com>
To: "Richard Cyganiak" <richard@cyganiak.de>
Cc: "Giovanni Tummarello" <giovanni.tummarello@deri.org>, "Jim Hendler" <hendler@cs.rpi.edu>, "Michael Hausenblas" <michael.hausenblas@deri.org>, public-lod@w3.org


On Sat, Nov 22, 2008 at 4:11 PM, Richard Cyganiak <richard@cyganiak.de> wrote:
> Yves,
> On 21 Nov 2008, at 22:30, Yves Raimond wrote:
>> On Fri, Nov 21, 2008 at 8:08 PM, Giovanni Tummarello
>> <giovanni.tummarello@deri.org> wrote:
>>> IMO, considering MySpace's 12 billion triples as part of LOD is quite a
>>> stretch (same with other wrappers), unless they are provided by the
>>> entity itself. (E.g. I WOULD count the LiveJournal FOAF files, on the
>>> other hand; OK, they're not linked, but they're no less useful than the
>>> MySpace wrapper, are they? In fact they are linked quite well if you
>>> use the Google social API.)
>> Actually, I don't think I can agree with that. Whether we want it or
>> not, most of the data we publish (all of it, apart from specific cases,
>> e.g. reviews) is provided by wrappers of some sort, e.g. Virtuoso, D2R,
>> P2R, web service wrappers, etc. Hence, it makes no sense to try to
>> distinguish datasets on the basis of whether they're published through
>> a "wrapper" or not.
>> Within LOD, we only select datasets for inclusion in the diagram on
>> the basis of whether they are published according to linked data
>> principles. The stats I sent reflect just that: some stats about the
>> datasets currently in the diagram.
>> The origin of the data shouldn't matter. The fact that it is published
>> according to linked data principles and linked to at least one dataset
>> in the cloud should matter.
> I think this view is too simplistic.
> I think what Giovanni and others mean when they try to distinguish
> "wrappers" from other kinds of LOD sites is not about the implementation
> technology. It's not about whether the data comes from a triple store or
> RDBMS or flat files or REST APIs or whatever.
> It's about licenses and rights.
> If I wrap an information service provided by a third party into a linked
> data interface, then I had better make sure that the terms of service
> permit this, and that no copyright laws are violated.
> There are some sites in the LOD cloud that, as far as I can tell, violate
> the TOS of the originating service. The MySpace wrapper and the RDF Book
> Mashup are maybe the clearest examples. Others are in the grey area.
> This is always an issue when party A wraps a service provided by party B. I
> think it's reasonable to treat all these datasets with extra caution, unless
> A has provided a clear argument and documentation to the effect that B'a
> license permits this kind of service.

Richard, I certainly agree with all you just mentioned. But Jim's
question was: "what is the size of the datasets in the current LOD
diagram", and I gave some stats about some of them - simple question,
simple (but partial) answer :-) I am not questioning whether the
licensing is all clear for every single dataset depicted in the
diagram, or whether it was right to include them in the first place.
Most of them are still within a "grey area", and licensing is an
extremely tricky problem, as we all know.
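As an aside, the "published according to linked data principles" criterion mentioned above can be checked mechanically to a first approximation: a resource URI should dereference (with content negotiation) to an RDF representation. Below is a minimal, hypothetical sketch of the content-type side of such a check; the list of RDF media types and the helper names are my own assumptions, not anything prescribed by the LOD diagram maintainers.

```python
# Hypothetical helper for a linked-data sanity check: decide whether an
# HTTP Content-Type header denotes an RDF serialization. The media-type
# list below is an illustrative assumption, not an exhaustive registry.

RDF_MEDIA_TYPES = {
    "application/rdf+xml",
    "text/turtle",
    "application/n-triples",
}

def looks_like_rdf(content_type: str) -> bool:
    """Return True if the Content-Type header value denotes RDF."""
    # Strip parameters such as "; charset=utf-8" before comparing.
    media_type = content_type.split(";")[0].strip().lower()
    return media_type in RDF_MEDIA_TYPES

def accept_header() -> str:
    """Accept header a linked data client might send when dereferencing."""
    return ", ".join(sorted(RDF_MEDIA_TYPES)) + ", */*;q=0.1"

if __name__ == "__main__":
    print(looks_like_rdf("text/turtle; charset=utf-8"))  # True
    print(looks_like_rdf("text/html"))                   # False
```

A real check would of course also follow 303 redirects and verify that the returned RDF actually describes the requested URI, but that is beyond this sketch.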


> Best,
> Richard
>>> Giovanni
Received on Sunday, 23 November 2008 00:59:39 UTC
