
Re: Squaring the HTTP-range-14 circle

From: Richard Cyganiak <richard@cyganiak.de>
Date: Mon, 13 Jun 2011 12:41:32 +0100
Cc: Christopher Gutteridge <cjg@ecs.soton.ac.uk>
Message-Id: <8A1B3A13-195A-4BF5-9C69-FF904BA63857@cyganiak.de>
To: public-lod@w3.org
On 13 Jun 2011, at 09:59, Christopher Gutteridge wrote:
> The real problem seems to me that making resolvable, HTTP URIs for real world things was a clever but dirty hack and does not make any semantic sense.

Well, you worry about *real-world things*, but even people who just worry about *documents* have said for two decades that the web is broken because it conflates names and addresses. And they keep proposing things like URNs and info: URIs and tag: URIs and XRIs and DOIs to fix that and to separate the naming concern from the address concern. And invariably, these things fizzle around in their little niche for a while and then mostly die, because this aspect that you call a “clever but dirty hack” is just SO INCREDIBLY USEFUL. And being useful trumps making semantic sense.

HTTP has been successfully conflating names and addresses since 1989.

There are a trillion web pages out there, all named with URIs. And even if just 0.1% of these pages are unambiguously about a single specific thing, that gives us a billion free identifiers for real-world entities, all already equipped with rich *human-readable* representations, and already linked and interconnected with *human-readable*, untyped, @href links.

And these one billion URIs are plain old http:// URIs. They don't have a thing:// or a tdb:// at the beginning, nor a #this or #that at the end, nor do they respond with 303 redirects or to MGET requests or whatever other nutty proposals we have come up with over the years to disambiguate between page and topic. They are plain old http:// URIs. A billion.
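(For anyone who hasn't suffered through it: the 303 arrangement prescribed by the httpRange-14 resolution means a URI naming a real-world thing may not answer 200 directly; it has to redirect the client to a second URI for a document about the thing. A rough sketch of the exchange, with made-up example.org paths:)

```http
GET /people/alice HTTP/1.1
Host: example.org

HTTP/1.1 303 See Other
Location: http://example.org/docs/alice
```

Two URIs and an extra round trip for every single thing — exactly the ceremony that no mainstream publisher is going to bother with.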

Then add to that another huge number of URIs that already respond with JSON or XML descriptions of some interesting entity, like the one from Facebook that Kingsley mentioned today in a parallel thread. Again, no thing:// or tdb:// or #this or 303 or MGET on any of them.

I want to use these URIs as identifiers in my data, and I have no intention of redirecting through an intermediate blank node just because the TAG fucked up some years ago.

I want to tell the publishers of these web pages that they could join the web of data just by adding a few @rels to some <a>s, and a few @properties to some <span>s, and a few @typeofs to some <div>s (or @itemtypes and @itemprops). And I don't want to explain to them that they should also change http:// to thing:// or tdb:// or add #this or #that or make their stuff respond with 303 or to MGET requests because you can't squeeze a dog through an HTTP connection.
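(Concretely, that "few @rels and @properties" pitch looks something like this — a hypothetical homepage fragment marked up with RDFa. The names are invented; the empty about="" makes the page's own URI the subject, which is the whole point:)

```html
<div prefix="foaf: http://xmlns.com/foaf/0.1/" about="" typeof="foaf:Person">
  <!-- about="" = this very page; no #this, no 303, no second URI -->
  My name is <span property="foaf:name">Alice Example</span>, and
  I know <a rel="foaf:knows" href="http://example.org/bob">Bob</a>.
</div>
```

That's the entire cost of entry: three attributes on markup they already have.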

And here you and Pat and Alan (and TimBL, for that matter) are preaching that we can't use these one billion fantastic free URIs to identify things because it wouldn't make semantic sense.

Being useful trumps making semantic sense. The web succeeded *because* it conflates name and address. The web of data will succeed *because* it conflates a thing and a web page about the thing.

@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

<http://richard.cyganiak.de/>
    a foaf:Document;
    dc:title "Richard Cyganiak's homepage";
    a foaf:Person;
    foaf:name "Richard Cyganiak";
    owl:sameAs <http://twitter.com/cygri>;
    .

There.

If your knowledge representation formalism isn't smart enough to make sense of that, then it may just not be quite ready for the web, and you may have some work to do.

Best,
Richard
Received on Monday, 13 June 2011 11:42:01 UTC
