- From: David Wood <david@3roundstones.com>
- Date: Fri, 5 Nov 2010 10:18:47 -0400
- To: nathan@webr3.org
- Cc: Harry Halpin <hhalpin@ibiblio.org>, Ian Davis <me@iandavis.com>, "public-lod@w3.org" <public-lod@w3.org>
- Message-Id: <26EC1A7A-A8EC-41FF-80C8-7686E1C2DD38@3roundstones.com>
Hi all,

This message is an attempt to summarize my position in relation to the points made by others. I hope it is useful to the discussion. A new approach to the problem, discussed at the end of this message, proposes deprecating the 303 for use in Linked Data (only) in favor of a new HTTP response code. The new response code would state: "The URI you just dereferenced refers to a description of a resource that may be informational, physical or conceptual. The information you are being returned in this response contains an RDF description of the resource you dereferenced."

1) Kudos to Ian for starting the most engaging discussion on this list in many moons.

2) I think we all agree that the SemWeb/Linked Data usage of the 303 is a hack, non-optimal, and worthy of reconsideration *presuming* that we can change the way a lot of people use the Web. I'm an optimist, so I'm willing to have a go.

3) Nathan (correctly, IMO) summarized the core problems with the 303 when he said:

On Nov 4, 2010, at 16:23, Nathan wrote:
> </thing> -> 303 -> </doc>
>
> (1) Many automated clients that make assertions about URIs treat HTTP as a blackbox, thus are still saying </thing> a :Document . (original problem not solved)
>
> (2) Many Humans are clicking on </thing> getting the </doc> URI in their address bar then using that instead, saying that </doc> a :Thing . (new problem)
>
> (3) Network effect of 303 (2 requests) vs 200 (single request), as well as deployment considerations.
>
> Completely leaving frag ids out of the equation, it appears (to me at least) that new insight is that 303 isn't addressing the problem (1) and rather introducing more (2) and (3).

David Booth's comments are similar in scope and, though he puts it differently, he agrees that the main issue is "that many applications *will* wish to distinguish between the toucan and its web page (or between the toucan's age and the age of its web page)". Indeed, the need for different URIs between a physical (or conceptual) resource and its (informational) description is the central issue.

4) Leigh Dodds is right to call me on the distinction between deprecating the 303 outright and deprecating its use in Linked Data. I do not think the 303 should be deprecated in general, but I do not think it is the best solution for Linked Data.

5) I totally agree with Michael Hausenblas when he says:

On Nov 5, 2010, at 05:29, Michael Hausenblas wrote:
> It occurs to me that one of the main features of the Linked Data community
> is that we *do* things rather than having endless conversations what would
> be the best for the world out there.

6) We can't really hack at the 303 any more than we have. I explored that in 2007 and came up pretty empty:
http://prototypo.blogspot.com/2007/08/returning-http-303s-for-semantic-web.html

So where does that leave us? I think that the best way out is to return a single HTTP response for the resolution of a URI describing a physical or conceptual resource, one that unambiguously states that the returned response is a *description* of that resource. That leaves me in agreement with Phil Archer (http://philarcher.org/diary/303/) when he proposed a new HTTP response code. Let's just do it!
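To make the consumer side concrete, here is a rough sketch, purely illustrative, of how a client could act on such a code instead of treating HTTP as a black box (Nathan's point 1). The Python below, the example URI and the 210 "Description Found" code are all assumptions of this proposal; nothing here is registered or standardized.

[[
# Illustrative only: a consumer that uses the *proposed* (unregistered)
# 210 status to decide whether the bytes it received are a representation
# of the requested URI or a description of the thing that URI names.
from urllib.request import urlopen

def fetch(uri):
    resp = urlopen(uri)        # any 2xx response is returned, not raised
    body = resp.read()
    if resp.status == 210:
        # Proposed semantics: <uri> may name a physical or conceptual
        # resource, and body is an RDF description OF it; do not conclude
        # that <uri> identifies a document.
        return ("description-of", body)
    # A plain 200: body is a representation of the information resource
    # identified by <uri>.
    return ("representation-of", body)

if __name__ == "__main__":
    kind, body = fetch("http://example.org/toucan")   # made-up URI
    print(kind, len(body), "bytes")
]]

The point is simply that a single status check on one round trip replaces having to follow and interpret a 303 redirect chain.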
It might work like this (using Ian's notation from http://iand.posterous.com/is-303-really-necessary):

# Get an information resource:
GET /doc returns a 200 (OK)

# Get an information resource with an RDF description:
GET /toucan_info responds with a 210 (Description Found)

# Get a physical resource:
GET /toucan_physical responds with a 210 (Description Found)

# Get a conceptual resource:
GET /toucan_definition responds with a 210 (Description Found)

How does that stack up against Ian's objections to the 303? Rather well, actually.

- It returns the required information in one GET.
- Multiple descriptions can be linked from a resource's URI via the returned RDF.
- A human using a browser stays at the same URI requested (and content negotiation would still work).
- It is trivial to configure a Web server to match an HTTP status code to file extensions.
- It can be implemented using a static Web server setup (one that serves just RDF).
- It does not mix layers of responsibility.
- It can be used with information resources, as well as physical and conceptual resources.
- It is easy to explain to the broader community.

Disadvantages are:

- W3C/IETF buy-in, time and effort required to standardize.
- Server operators would have to configure their Web servers to return the correct status code (until Web servers shipped with reasonable defaults).

All that is left is to prototype it. It doesn't seem to break curl, so that's a start :)

[[
Macadamia:~ dwood$ curl -I http://localhost:8090/
HTTP/1.1 210 Description Found
Date: Fri, 05 Nov 2010 14:15:31 GMT
Server: Hacked up Web server by prototypo
]]

Regards,
Dave
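In case anyone wants to reproduce the curl result above, a minimal stand-in for such a prototype, sketched in Python, might look like the following. This is not the actual "hacked up Web server" from the transcript; the paths, the RDF payload and, of course, the 210 code itself are invented for illustration.

[[
# A throwaway sketch of a server answering with the proposed 210 code.
# Not the prototype referenced above; paths and payload are invented here.
from http.server import BaseHTTPRequestHandler, HTTPServer

DESCRIPTION = b"""<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <!-- an RDF description of the toucan would go here -->
</rdf:RDF>
"""

class DescriptionHandler(BaseHTTPRequestHandler):
    server_version = "Hacked-up-sketch/0.1"

    def _respond(self, send_body):
        # Every path on this toy server names a physical or conceptual
        # resource, so every response carries a description, i.e. the
        # proposed 210 status rather than 200 or 303.
        self.send_response(210, "Description Found")
        self.send_header("Content-Type", "application/rdf+xml")
        self.send_header("Content-Length", str(len(DESCRIPTION)))
        self.end_headers()
        if send_body:
            self.wfile.write(DESCRIPTION)

    def do_GET(self):
        self._respond(send_body=True)

    def do_HEAD(self):
        # curl -I issues a HEAD request, as in the transcript above.
        self._respond(send_body=False)

if __name__ == "__main__":
    HTTPServer(("localhost", 8090), DescriptionHandler).serve_forever()
]]

Running it and repeating curl -I http://localhost:8090/ should produce the same 210 status line, modulo the Server header.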
Received on Friday, 5 November 2010 14:19:28 UTC