
Re: HTTP URIs for real world objects

From: Reto Bachmann-Gmür <reto@gmuer.ch>
Date: Wed, 16 Jan 2008 12:08:38 +0100
Message-ID: <478DE5B6.10907@gmuer.ch>
To: Leo Sauermann <leo.sauermann@dfki.de>
CC: public-sweo-ig@w3.org, semantic-web@w3.org
Hi Leo

I'm still not sure which ancient TAG decision you're referring to. In 
the not too ancient httpRange-14 resolution[1] the TAG describes good 
practice as follows: "Authorities MAY create HTTP URIs for non-information 
resources in addition to those for information resources." I can't find 
anything changing this MAY into a SHOULD in the meantime.

I'm not suggesting changing the whole architecture of the internet; 
security isn't always an issue, and the PGP web of trust or protocols 
like HTTPSY demonstrate that security is possible over the internet. I'm 
not even against using HTTP URIs on the Semantic Web. For instance, 
using HTTP URIs as values of, say, skos:isPrimarySubjectOf still makes 
the data linkable and the definition retrievable over HTTP; such a 
practice, however, allows declaring multiple description documents using 
multiple protocols. Furthermore, if the HTTP system is compromised we 
have one wrong statement which is likely to be recognizable as such, or 
at least expressible as in [wot:fingerprint 
"04FFF3AC57DF217C6D383DBC0110FB923756EA0B"] owl:differentFrom 
[skos:isPrimarySubjectOf <http://www.example.org/data/alice>]. If, 
however, an evil Alice squats http://www.example.com/id/alice, using the 
URI to describe herself, there is no way even to express that 
<http://www.example.com/id/alice> no longer identifies the person it 
originally identified.
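The inline statement above can be written out as Turtle. This is only an illustrative sketch: the prefix bindings (the WOT vocabulary, OWL, and the 2004 SKOS namespace) are assumptions, and the fingerprint and URIs are the example values from the text.

```turtle
@prefix wot:  <http://xmlns.com/wot/0.1/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

# The person identified by this PGP key fingerprint is asserted to be
# different from whatever the (possibly compromised) HTTP document
# primarily describes. Both subjects are blank nodes, so the claim
# survives even if the HTTP description itself can no longer be trusted.
[ wot:fingerprint "04FFF3AC57DF217C6D383DBC0110FB923756EA0B" ]
    owl:differentFrom
    [ skos:isPrimarySubjectOf <http://www.example.org/data/alice> ] .
```

Because the identification goes through the fingerprint rather than through an HTTP URI, the statement remains expressible under multiple protocols.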

I've been arguing[2] that the semantic web is not an Aristotelian 
classification system based on the idea that everything can be named and 
given an authoritative definition. If naming real-world resources with 
HTTP URIs becomes a SHOULD-level requirement, the semantic web would not 
only be such a system but a vulnerable and insecure implementation of one.


1. http://www.w3.org/2001/tag/doc/httpRange-14/2007-05-31/HttpRange-14
2. http://www.nettime.org/Lists-Archives/nettime-l-0712/msg00057.html

Leo Sauermann wrote:
> Hi Reto,
> the document you discuss describes the decisions the TAG has already 
> made to a greater audience. The decisions were made long ago 
> and the document explains them. We cannot incorporate your feedback, as 
> it suggests changing the TAG decisions, which is against the goal of 
> the document.
> Your answer suggests replacing the whole architecture of the 
> internet, including DNS and HTTP.
> You may want to join the W3C and start a working group on this.
> Otherwise, the right mailing list to discuss this is semantic-web@w3.org
> greetings and best wishes
> Leo
> On 15.01.2008 22:15, Reto Bachmann-Gmür wrote:
>> From http://www.w3.org/TR/2007/WD-cooluris-20071217/
>>> Given only a URI, machines and people should be able to retrieve a 
>>> description about the resource identified by the URI from the Web. 
>>> Such a look-up mechanism is important to establish shared 
>>> understanding of what a URI identifies. Machines should get RDF data 
>>> and humans should get a readable representation, such as HTML. The 
>>> standard Web transfer protocol, HTTP, should be used.
>> I think there are reasons to deprecate the use of HTTP URIs for 
>> real-world objects, as promoting the assumption that dereferencing such 
>> a URI yields an authoritative definition is dangerous.
>>    * DNS is centralistic
>>          o I don't know whether control has passed from the US DoC to
>>            the UN WSIS or to someone else. The root servers are
>>            controlled by a more or less democratic central authority,
>>            and so are the different top-level domains. Relying on HTTP
>>            URIs is relying on the DNS system, which means trusting the
>>            authorities at every level of the domain name. This
>>            seems incompatible with the design principle of
>>            decentralization [1].
>>    * HTTP is insecure
>>          o One cannot know whether an HTTP response comes from where
>>            it's supposed to come from, or whether it has been modified
>>            or read on the way to my computer. Even if I can still
>>            encrypt all my actual communication, having to look up the
>>            definitions of the terms I use over an unencrypted
>>            connection compromises my privacy.
>>    * Uncool URIs happen
>>          o In an ideal world Alice will always control the response for
>>            http://www.example.com/id/alice. In the real world however:
>>                + Alice's server might be cracked
>>                + Alice might have forgotten to renew the domain name
>>                + Alice might be unable to pay for the hosting or for
>>                  the domain
>>                + The revolutionary guard might have taken control of
>>                  7 root servers, redirecting all imperialistic domains
>>                  to educational content :)
>>                + ...
>> Cheers,
>> Reto
>> 1. http://www.w3.org/2003/01/Consortium.pdf

Received on Wednesday, 16 January 2008 11:08:49 UTC
