
Re: "Hash URIs" and content negotiation

From: Karl Dubost <karl@w3.org>
Date: Fri, 10 Nov 2006 11:09:05 +0900
Message-Id: <0DE07E9D-63C8-4262-B2BA-D55FD81BE72F@w3.org>
Cc: Semantic Web <semantic-web@w3.org>
To: Alan Ruttenberg <alanruttenberg@gmail.com>


On 8 Nov 2006, at 01:32, Alan Ruttenberg wrote:
> On Nov 7, 2006, at 10:50 AM, Dan Brickley wrote:
>> You're very right of course, it's problematic to conneg in context  
>> of such URIs. This is why I always preferred slash URIs! Ah well...
>
> Personally, I can't tell why content negotiation is a good idea in  
> any context. To my mind it's hiding interesting information in the  
> innards of a network protocol instead having it explicitly  
> available, in say, OWL or RDF.

Because there are cases where it is difficult to do any other way.  
Content negotiation is a tough issue with many faces. It is not only  
a linear list of choices, but a multi-dimensional matrix:
	- languages (fr, en, ja, …)
	- format of representation (png, gif, html, …)
	- format of transport (gzip)
	- …
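To make the "matrix" concrete, here is a minimal sketch (not any particular server's implementation) of how a server might pick one representation along each negotiation axis. The variant names and the simplified q-value parsing are illustrative only:

```python
def parse_accept(header):
    """Parse an Accept-style header into (value, q) preference pairs."""
    prefs = []
    for part in header.split(","):
        fields = part.strip().split(";")
        value, q = fields[0].strip(), 1.0
        for param in fields[1:]:
            name, _, num = param.strip().partition("=")
            if name == "q":
                q = float(num)
        prefs.append((value, q))
    return prefs

def choose(available, header):
    """Return the available variant the client prefers most, or None."""
    prefs = dict(parse_accept(header))
    best = max(available, key=lambda v: prefs.get(v, 0.0))
    return best if prefs.get(best, 0.0) > 0 else None

# Each axis (format, language, encoding) is negotiated separately:
print(choose(["image/png", "image/gif"], "image/gif;q=0.9, image/png;q=0.5"))
# prints image/gif
print(choose(["fr", "ja"], "ja, en;q=0.8"))
# prints ja
```

A real server has to combine these per-axis choices (Accept, Accept-Language, Accept-Encoding) into one answer, which is where the matrix, rather than a linear list, appears.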

* Discoverability depending on formats
For example, there is no obvious linking mechanism in PNG, GIF or  
JPEG to list alternate URIs of the "same" content *inside* the  
content. How do I say, inside a PNG, that there is a GIF version?


* Keeping up to date - Cost of management
Another issue is updating. For example, with languages: suppose we  
have a French version of our HTML document, which is later translated  
into Japanese and Korean. We have to update 3 files,

    <link rel="Alternate"
          href="index.html.ja"
          hreflang="ja"
          title="Version japonaise"/>

then when we add another one, we now have to update 4 files, and so  
on. It becomes very difficult to keep everything up to date.
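A small sketch of why the maintenance cost grows: with N language variants, each file must carry links to the N-1 others, so adding one language means touching every existing file. The filenames and titles below are illustrative, not from the original message:

```python
# Hypothetical language set; titles are illustrative only.
LANG_TITLES = {"fr": "Version française",
               "ja": "Version japonaise",
               "ko": "Version coréenne"}

def alternate_links(current, languages):
    """Build the <link rel="Alternate"> elements one file must carry."""
    links = []
    for lang in sorted(languages):
        if lang == current:
            continue  # a file does not link to itself
        links.append('<link rel="Alternate" href="index.html.%s" '
                     'hreflang="%s" title="%s"/>'
                     % (lang, lang, LANG_TITLES[lang]))
    return links

# With three languages, every file carries 2 links; adding Korean
# meant editing the French and Japanese files as well.
for link in alternate_links("fr", ["fr", "ja", "ko"]):
    print(link)
```

Generating the links is easy; the point is that without some machinery like this, every new translation forces a hand edit of all existing files.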


* Wrong Content-Type
I was wondering how the "hash URIs" proposal works when the server  
sends the wrong content-type. Has anyone played with this a bit,  
making test cases with obviously bogus files, and thought about which  
mechanisms we could put in place to recover from, or notify of, the  
problem?
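One possible recovery mechanism, sketched here under the assumption that the client is willing to sniff: compare the Content-Type the server declares against the payload's magic bytes. The signature table is deliberately tiny; real sniffing is more involved:

```python
# Leading magic bytes for a few formats (abbreviated table).
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"GIF87a": "image/gif",
    b"GIF89a": "image/gif",
}

def sniff(payload):
    """Guess a media type from leading magic bytes, or None."""
    for magic, mediatype in SIGNATURES.items():
        if payload.startswith(magic):
            return mediatype
    return None

def check(declared, payload):
    """Return (ok, sniffed) so a client can notify on a mismatch."""
    sniffed = sniff(payload)
    return (sniffed is None or sniffed == declared), sniffed

# A server claiming GIF for PNG bytes is one of the obviously bogus cases:
print(check("image/gif", b"\x89PNG\r\n\x1a\n" + b"..."))
# prints (False, 'image/png')
```

Whether a client should silently recover (trust the bytes) or surface the problem (trust the header and warn) is exactly the open question above.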

There is something missing which could help a Web site expose its  
information-space map, a bit à la Google's sitemap.





* On Linking Alternative Representations To Enable Discovery And  
Publishing
   http://www.w3.org/2001/tag/doc/alternatives-discovery
   TAG Finding 1 November 2006
* Transparent Negotiation - the Missing HTTP Feature
   http://www.w3.org/QA/2006/10/missing_http_feature
   QA Weblog 20 October 2006
* Google Sitemap Gen
   http://goog-sitemapgen.sourceforge.net/


-- 
Karl Dubost - http://www.w3.org/People/karl/
W3C Conformance Manager, QA Activity Lead
   QA Weblog - http://www.w3.org/QA/
      *** Be Strict To Be Cool ***
Received on Friday, 10 November 2006 02:09:37 GMT
