Re: Is 303 really necessary? (dealing with ambiguity)

On Mon, 2010-11-08 at 16:18 +0900, Tore Eriksson wrote:
> Hi David,
> 
> David Booth wrote:
[ . . . ]
> > And others may well make statements
> > about that web page.  For example, someone crawling the web may make a
> > statement saying that <http://iandavis.com/2010/303/toucan> returned
> > 1027 bytes in response to a GET request.  They may not say it in RDF --
> > they might say it in XML or any other language.
> 
> As long as they are aware that they are talking about a specific
> representation of this resource I can't see any problem with this. If
> they think they are stating something about the resource itself, well
> they would be wrong even if the current URI was an "information
> resource". They apparently need to learn more about web technology -
> representations, caching, con-neg, &c.

How about:

  "Ian Davis owns web page <http://iandavis.com/2010/303/toucan>."

  "The content at <http://iandavis.com/2010/303/toucan> was last updated
7-Nov-2010."

  "<http://iandavis.com/2010/303/toucan> has a page rank of
123,456,789."

Those statements are not talking about any specific representations, nor
are they talking about the toucan.  All are completely reasonable
statements for someone knowing nothing about RDF to make.
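
Just to make the ambiguity concrete, here is a rough sketch (assuming
Python's rdflib and a made-up ex: vocabulary, neither of which anyone
above is committed to) of how such page-level statements end up
attached to the very same node that is supposed to denote the toucan:

  # A sketch only: rdflib and the ex: vocabulary are assumptions, not
  # anything mandated by the discussion above.
  from rdflib import Graph, Literal, Namespace, RDF, URIRef

  EX = Namespace("http://example.org/vocab/")  # hypothetical vocabulary
  uri = URIRef("http://iandavis.com/2010/303/toucan")

  g = Graph()
  # Statements about the *web page*, paralleling the ones above:
  g.add((uri, EX.owner, Literal("Ian Davis")))
  g.add((uri, EX.lastUpdated, Literal("2010-11-07")))
  g.add((uri, EX.pageRank, Literal(123456789)))
  # A statement intended to be about the *toucan* itself:
  g.add((uri, RDF.type, EX.Bird))

  # Both sets of triples hang off the same node; nothing in the merged
  # graph distinguishes the bird from the page.
  print(g.serialize(format="turtle"))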

> [ . . . ]
> > So I don't think it is reasonable or realistic to think that we can
> > *avoid* creating an ambiguity by returning additional RDF statements
> > with the 200 response.  Rather, the heuristic that you propose is a way
> > for applications to *deal* with that ambiguity by tracking the
> > provenance of the information: if one set of assertions was derived from
> > an HTTP 200 response code, and another set of assertions was derived
> > from an RDF document that you trust, then ignore the assertions that
> > were derived from the HTTP 200 response code.
> 
> By not drawing ill-founded conclusions about the nature of the resource
> through the response code, ambiguity could have been avoided in the
> first place.
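
For concreteness, the provenance-tracking heuristic quoted above might
be sketched roughly as follows (the function name, the argument shapes,
and the all-or-nothing policy are mine, purely for illustration):

  # Rough sketch of the heuristic quoted above; illustrative only.
  def resolve(assertions_from_200, assertions_from_trusted_rdf):
      # Each argument is a set of (subject, predicate, object) triples.
      # If a trusted RDF document supplied statements, ignore whatever
      # was inferred purely from the HTTP 200 response code; otherwise
      # fall back on the 200-derived inference.
      if assertions_from_trusted_rdf:
          return set(assertions_from_trusted_rdf)
      return set(assertions_from_200)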

Apparently you and I disagree about what it means to be a web page.  I
personally know of no better criterion for something being a web page
than whether it returns a 200 status code in response to a GET request.
Perhaps one would characterize this as duck typing:
http://en.wikipedia.org/wiki/Duck_typing
What other criteria would you use?  
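
In code, that criterion is about as simple as it sounds.  A minimal
sketch using Python's standard library (the function name is mine, and
it deliberately looks only at the immediate response code, so a 303
would not be mistaken for a 200):

  import http.client
  from urllib.parse import urlparse

  def looks_like_a_web_page(uri):
      # Duck-typing test: a thing is a web page if a GET on its URI
      # answers 200.  Reads only the immediate status line, so a 303
      # redirect does not count.  Error handling and HTTPS omitted.
      parts = urlparse(uri)
      conn = http.client.HTTPConnection(parts.netloc)
      conn.request("GET", parts.path or "/")
      status = conn.getresponse().status
      conn.close()
      return status == 200

  print(looks_like_a_web_page("http://iandavis.com/2010/303/toucan"))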



-- 
David Booth, Ph.D.
Cleveland Clinic (contractor)
http://dbooth.org/

Opinions expressed herein are those of the author and do not necessarily
reflect those of Cleveland Clinic.

Received on Monday, 8 November 2010 21:36:16 UTC