RE: httpRange-14

> -----Original Message-----
> From: Bill de hÓra [mailto:dehora@eircom.net]
> Sent: Tuesday, July 29, 2003 9:19 AM
> To: Dare Obasanjo
> Cc: Sandro Hawke; Roy T. Fielding; Tim Berners-Lee; Norman Walsh;
> www-tag@w3.org
> Subject: Re: httpRange-14
> 
> 
> 
> Dare Obasanjo wrote:
> 
> [[[
> I've always thought this was the obvious solution to the
> RDF issue instead of engaging in months of pointless metaphysical
> debates on the true nature of URIs. RDF needs unambiguous identifiers
> that don't have the baggage that exists with current URI schemes and
> their association with network retrievable information resources.
> ]]]
> [that was one big line - mua problem?]
> 
> Well nothing in the model theory would break. Don't know about the 
> deployed tools - I guess all their (URI) equivalence methods would 
> break. In my experience the URIs are the biggest assumption you'd 
> take on with an RDF tool.
> 
> The main arguments against that idea are:
> 
>   o - RDF would then have nothing to do with the current web 
> (other than being useful to describe and annotate bits of it). 
> Possibly that would unsettle people.

Yes, that would be unsettling.  Don't we want the Semantic web to have 
a relationship to the REST web?

Don't we want the thing denoted by a URI, used as an SW-URI by shared
agreement of its meaning, to have some meaningful relationship to the 
representations returned by the web machinery from that URI, used as an 
HTTP-URI?  And don't we want to select language for the architecture 
document that makes recommendations as to what that relationship should be?

How about a goal of construction?  I mean this.  Suppose that 
one day in the future, after the Semantic web has been humming along 
within, or beside, or above the REST web for a time, the Semantic web 
goes down, but the REST web does not.  None of the ontologies or schemas 
can be used.  The agents in the communities that shared interpretations 
of some malfunctioning triples ask the experts for help.  Several 
semantic web scientists converge on the scene and promptly set about 
recreating the meaning of the triples they find, which now appear 
to them as ciphers.  All they have to go on are the HTTP-URIs 
that were used to write the assertions.  They take these URIs and plug 
them into a browser to see what is returned.  That is all they have 
with which to recreate the meaning of the broken triples.  

Let's hope that the writers of the triples put stuff at their URIs that 
will make reconstruction of the meaning of the triples possible, using 
only the stuff the REST web returns when those URIs are fed to it.
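The reconstruction scenario above can be sketched in code: the only
handles that survive in a broken triple are the HTTP-URIs used to write
it, and the first step is simply to extract them so they can be fed
back to the web machinery. A minimal sketch in Python (the N-Triples
line and the helper name are illustrative, not from this thread):

```python
import re

# A triple whose shared interpretation has been lost; all that
# survives are the HTTP-URIs used to write the assertion.
# (Example data, not from the original discussion.)
triple = ('<http://example.org/people#alice> '
          '<http://xmlns.com/foaf/0.1/knows> '
          '<http://example.org/people#bob> .')

def extract_uris(ntriple):
    """Pull every <...>-delimited URI reference out of an N-Triples line."""
    return re.findall(r'<([^>]+)>', ntriple)

uris = extract_uris(triple)
# Reconstruction step: dereference each URI with ordinary web
# machinery (e.g. urllib.request.urlopen) and read whatever
# representation comes back -- that is all the scientists have
# to go on. Shown here as a print rather than a live fetch.
for uri in uris:
    print(uri)
```

If the representations served at those three URIs say something useful
about the people and the "knows" relation, reconstruction has a chance;
if they return nothing, the triple stays a cipher.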

>   o - you need bind/deref mechanisms in any case; both to move the 
> triples around and more importantly to find out more information 
> about the triples you have to hand. There's not a lot of point 
> creating and deploying a transfer protocol for the semantic web, 
> when we have 4 or 5 perfectly good ones already.
> 
> The other option is to build machinery to allow any ole URI to be 
> dereferenced, which makes stuff like DDDS very attractive.
> 
> Bill de hÓra
> 

Received on Wednesday, 30 July 2003 21:35:41 UTC