Re: LDOW2009 Workshop now publishing Linked Data (was Re: Linked Data on the Web (LDOW2009) workshop papers online.)

From: Tom Heath <tom.heath@talis.com>
Date: Fri, 20 Mar 2009 19:38:31 +0000
Message-ID: <89f622f10903201238o7acbc7d7y76fe54d86dd8a363@mail.gmail.com>
To: giovanni.tummarello@deri.org
Cc: Knud Hinnerk Möller <knud.moeller@deri.org>, public-lod@w3.org
2009/3/19 Giovanni Tummarello <g.tummarello@gmail.com>:
> The only reason to mint resolvable URIs is to allow fetching of a description
>
> i'd say that minting in other people's namespaces is really asking for
> trouble and should be discouraged. one should, could, possibly add
> "sameAs" if some URI exists somewhere else.

Giovanni, relax :) We all know that we shouldn't arbitrarily mint URIs
in namespaces we don't control, which is why I'm checking this out
with Knud, who attached the condition that we ensure the data is
actually uploaded in the end.

> honestly? i don't even see the reason why to copy this data on dogfood!
> the entire point of the semantic web is to be able to use data that's
> distributed.

Agreed, although I do think there are some advantages to replicating
the data at the dogfood server, some of which have been outlined by
Knud and Daniel. Think of the dogfood server as a cache (would the
issue disappear if Knud crawled the data and loaded it into the store
rather than me sending him a dump?), or as a domain-specific semantic
browser ;)

Looking at the bigger picture, I think before we start saying that any
particular strategy is bad we need to gather *much* more data and
experience on good and bad practices. A couple of days ago there was a
thread where Rob was reporting on the strategy we've taken in a
particular Talis product, which involves providing, in the RDF
description of a resource, all the data that may be required to build
the corresponding HTML page; Richard stated that this was the right
thing to do. I can see the appeal of this strategy, but is this not a
smaller-scale instance of the same problem of data duplication and
proliferation? I'm not saying that I have an answer, or even have an
opinion one way or the other; all I am saying is that it's early days
still and we don't know enough yet about which strategies work well
and when. Let's not dismiss things out of hand.

> copying will only create wrong duplicates once you change a bit of the
> data, e.g. correct the spelling of my last name in the page.

Sorry for the typo with your name. AFAICT I've changed all the
incorrect instances of this. If you spot any more after a hard page
refresh just let me know :)

> if it's one source, then fine: the source is changed and it's indexed
> again. if it has been copied... everybody loses, i'd say :-)

Or I send Knud a new dump which he loads in place of the other one ;)
Is there really a difference?

It's pretty hard for me to make any sense of the discussion that
follows from here, but it seems that the best thing to do is this:
I'll mint URIs in the ldow2009 namespace for papers, authors, chairs
and PC members, and sameAs them to the corresponding URIs in the
dogfood namespace. That way every need is addressed, right?
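In Turtle, that plan might look something like the sketch below. The
exact URI paths are hypothetical, invented here purely for illustration;
the real ones would be whatever the ldow2009 site and the dogfood server
actually mint.

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Hypothetical URIs for illustration only: a paper and an author minted
# in the workshop's own namespace, each linked via owl:sameAs to the
# corresponding resource in the dogfood namespace.
<http://events.linkeddata.org/ldow2009/paper/01>
    a foaf:Document ;
    owl:sameAs <http://data.semanticweb.org/workshop/ldow/2009/paper/01> .

<http://events.linkeddata.org/ldow2009/person/some-author>
    a foaf:Person ;
    owl:sameAs <http://data.semanticweb.org/person/some-author> .
```

A sameAs-aware consumer can then merge the descriptions from both
namespaces, which is the point: each party mints only in a namespace it
controls, yet the data still joins up.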

Cheers,

Tom.
Received on Friday, 20 March 2009 19:39:16 UTC