- From: Ian Davis <me@iandavis.com>
- Date: Fri, 5 Nov 2010 10:24:43 +0000
- To: nathan@webr3.org
- Cc: Leigh Dodds <leigh.dodds@talis.com>, Harry Halpin <hhalpin@ibiblio.org>, public-lod@w3.org, Doug Schepers <schepers@w3.org>
On Fri, Nov 5, 2010 at 10:05 AM, Nathan <nathan@webr3.org> wrote:

> Not at all, I'm saying that if big-corp makes a /web crawler/ that describes
> what documents are about and publishes RDF triples, then if you use 200 OK,
> throughout the web you'll get (statements similar to) the following
> asserted:
>
> </toucan> :primaryTopic dbpedia:Toucan ; a :Document .

I don't think so. If the bigcorp is producing triples from their crawl, then why wouldn't they use the triples they are sent (and/or the Content-Location and Link headers, etc.)? The above looks like what you'd get from a third-party translation of the crawl results, without the context of actually having fetched the data from the URI.

If the bigcorp is not linked data aware, then today they will follow the 303 redirect as a standard HTTP redirect. RFC 2616 says that the target URI is not a substitute for the original URI, but just an alternate location from which to obtain a response. The bigcorp will simply infer the statements you list above **even though there is a 303 redirect**. As RFC 2616 itself points out, many user agents treat 302 and 303 interchangeably. Only linked-data-aware agents will ascribe special meaning to a 303, and they are the ones more likely to use the data they are sent.

Ian
Received on Friday, 5 November 2010 10:25:17 UTC
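
To make the argument concrete, here is a minimal sketch (in Python, using the requests library; the URI and the response behaviour are assumed, not real) of why a generic, non-linked-data-aware crawler ends up attributing the retrieved representation to the original URI even when a 303 is in play: the redirect is followed transparently, and unless the client goes out of its way to inspect the redirect chain or the Content-Location/Link headers, it treats whatever it parsed as a description of the URI it originally requested.

```python
# Hedged sketch: a naive crawler fetching a URI that (we assume) answers
# with a 303 redirect. The URI below is illustrative, not a real endpoint.
import requests

doc_uri = "http://example.org/toucan"   # hypothetical URI that 303-redirects
resp = requests.get(doc_uri)            # requests follows the 303 transparently

# Unless the crawler inspects resp.history, the redirect is invisible to it,
# so it attributes the fetched representation to doc_uri and would assert
# something like </toucan> a :Document -- 303 or not.
if resp.history and resp.history[0].status_code == 303:
    print("303 seen; final location was", resp.url)
else:
    print("no redirect recorded; treating", doc_uri, "as the document fetched")

# A linked-data-aware client would instead consult the redirect chain and/or
# the Content-Location and Link response headers before deciding which URI
# the retrieved triples actually describe.
print("Content-Location:", resp.headers.get("Content-Location"))
print("Link:", resp.headers.get("Link"))
```

This is only an illustration of default HTTP client behaviour, not of any particular crawler; the point is that the special semantics of 303 exist only for clients written to look for them.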