Re: A(nother) Guide to Publishing Linked Data Without Redirects

In message 
<AANLkTikmg=+AUgjHLf-88q-6Jzd7=ZXZ2gsj-QDA1Xd+@mail.gmail.com>, Harry 
Halpin <hhalpin@ibiblio.org> writes
>
>The question is how to build Linked Data on top of *only* HTTP 200 -
>the case where the data publisher either cannot alter their server
>set-up (.htaccess) files or does not care to.

Might it help to look at this problem from the other end of the 
telescope? So far, the discussion has all been about what is returned. 
How about considering what is requested?

I assume that we're talking about the situation where a user (human or 
machine) is faced with a URI to resolve.  The implication is that they 
have acquired this URI through some Linked Data activity such as a 
SPARQL query, or reading a chunk of RDF from their own triple store. (If 
we're not - if we're talking about auto-magically inferring Linked 
Data-ness from random URLs - then I would agree that sticking RDFa 
into said random pages is the way to go, and would leave the 
discussion there.)

The Linked Data guidelines assume that said user is willing 
and able to indicate what sort of content they want, in this case via 
the Accept header mechanism.  This makes it reasonable to further 
specify that the fallback response, in the absence of a suitable Accept 
header, is to deliver a human-readable resource, i.e. an HTML web page. 
Thus the web of Linked Data behaves like part of the web of documents, 
if users take no special action when dereferencing URLs.
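
As an aside, the client side of that convention is trivial to 
sketch.  The following Python fragment is purely illustrative - the 
URI is made up, and the media type is just the obvious choice - but 
it shows the two behaviours described above:

  import urllib.request

  uri = "http://example.org/id/object/1234"   # hypothetical NIR URI

  # A Linked-Data-aware agent says what it wants via the Accept header ...
  req = urllib.request.Request(uri, headers={"Accept": "application/rdf+xml"})
  with urllib.request.urlopen(req) as resp:
      print(resp.headers.get("Content-Type"))  # expect an RDF media type

  # ... whereas a request with no special Accept header should fall
  # back to the human-readable HTML description.
  with urllib.request.urlopen(uri) as resp:
      print(resp.headers.get("Content-Type"))  # expect text/html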

If we agree that it is reasonable for user agents to take some action to 
indicate what type of response they want, then one very simple solution 
for the content-negotiation-challenged data publisher would be to 
establish a convention that adding '.rdf' to a URL should deliver an RDF 
description of the NIR signified by that URL.
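
Again purely as an illustration (the URI is invented, and the suffix 
convention is exactly the one proposed above), the client's side of 
that arrangement might look like this:

  import urllib.request

  def fetch_rdf_description(uri):
      """Fetch the RDF description of the NIR signified by `uri`,
      using the proposed '.rdf' suffix convention rather than
      content negotiation."""
      rdf_url = uri + ".rdf"
      with urllib.request.urlopen(rdf_url) as resp:
          return resp.read()

  # e.g. fetch_rdf_description("http://example.org/id/object/1234")
  # would simply GET http://example.org/id/object/1234.rdf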

Richard
-- 
Richard Light

Received on Thursday, 11 November 2010 09:55:12 UTC