
Re: fixed https://foafssl.org/test/WebId

From: Kingsley Idehen <kidehen@openlinksw.com>
Date: Fri, 06 Jan 2012 09:16:16 -0500
Message-ID: <4F070230.5010602@openlinksw.com>
To: public-xg-webid@w3.org
On 1/6/12 7:35 AM, Mo McRoberts wrote:
> On 6 Jan 2012, at 12:04, Kingsley Idehen wrote:
>
>> But the semantics of fragment identifiers don't really mandate comprehension solely on the part of user agents.
> The semantics of HTTP does.
>
>> Thus, an HTTP server can do what the user agent failed to handle by processing a request for a URL with a fragment id. Now, to somewhat complicate matters, if the HTTP server is a Linked Data [Resource] Server (i.e., not an Information [Resource] Server) it can, by way of a transparent content negotiation QoS algorithm, infer that the user agent seeks the description of a named subject, which it translates (via re-write rule) into:
>
> There is no way for a client to know in advance that a server will be able to process a request-URI including a fragment identifier, and so Postel’s Law applies more than ever. It’s counterproductive to both the client and the server to send it without having this knowledge in advance.

Transparent Content Negotiation (TCN) isn't counterproductive.

There is nothing wrong with a server being able to handle URLs with 
fragment identifiers if, for some reason, they arrive as part of an HTTP 
payload. This is just about being defensive on the server side. 
Personally, I don't advocate sending URLs with fragment identifiers over 
the wire. What I advocate is being as accommodating as possible, 
especially if the protocol in question allows that.

Remember (I am sure others can chime in here on the history): the whole 
fragment-identifier-over-the-wire issue arose from a typo in the specs 
way back. I do recall stumbling across this in a conversation thread a 
while back re. fragment identifier URLs and HTTP.

>
> There are three possible outcomes for such a request:
>
> 1. The server treats it differently, providing only the data related to that subject URI and nothing else
> 2. The server treats it identically, as though it were requested without the fragment
> 3. The server responds with a 4xx (or other) status

HTTP lets clients and servers talk intelligently, via negotiation and 
quality-of-service algorithms.
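To ground that point, here is a crude sketch of the quality-value side of content negotiation. It is illustrative only: the function name and media types are mine, not any particular server's API.

```python
def best_variant(accept_header: str, available: list) -> str:
    """Crude Accept-header parser: pick the available media type the
    client values most, using q= quality values (default q=1.0)."""
    prefs = {}
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        media_type, q = fields[0].strip(), 1.0
        for param in fields[1:]:
            param = param.strip()
            if param.startswith("q="):
                q = float(param[2:])
        prefs[media_type] = q
    # Types the client didn't list fall back to its '*/*' preference, else 0.
    return max(available, key=lambda m: prefs.get(m, prefs.get("*/*", 0.0)))

# A Linked Data-aware server could prefer Turtle for an agent that asks:
print(best_variant("text/html;q=0.5, text/turtle", ["text/html", "text/turtle"]))
```

Real TCN (RFC 2295) goes further, but the q-value arithmetic above is the essence of how client and server reach agreement.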


>
> In case (1), things are fine. In case (2), you have to parse the received data and extract only that relating to the subject URI. As you have no real way to distinguish a response according to (1) or (2), these would most likely be the same code-path. In case (3) — most likely to be the case on the balance of probabilities — you have to retry the request with the fragment stripped.

You are talking about a conventional HTTP server for content. I am 
talking about an HTTP server that's tightly coupled with SPARQL. A 
request for a URL with a fragment identifier can be translated (by a 
combination of TCN and re-write rules) into a SPARQL DESCRIBE. The 
server can decide that you are seeking a Descriptor Document and that 
the URL is really a generic Entity/Object Name.
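A minimal sketch of that rewrite step (the /sparql endpoint location and the function name are mine, purely illustrative):

```python
from urllib.parse import quote

SPARQL_ENDPOINT = "/sparql"  # hypothetical endpoint location

def handle_request_target(raw_target: str) -> str:
    """If a fragment identifier slipped into the request target, treat
    the whole URI as a subject name and rewrite it to a SPARQL DESCRIBE
    URL; otherwise leave the target alone."""
    if "#" in raw_target:
        query = f"DESCRIBE <{raw_target}>"
        return f"{SPARQL_ENDPOINT}?query={quote(query, safe='')}"
    return raw_target
```

A real deployment would express this as a server re-write rule rather than application code, but the translation is the same.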

Does this introduce a burden on Apache, Tomcat and friends? Of course it 
does. But they aren't the only kinds of HTTP servers: when operating in 
the Web's data-space dimension, the focal point is descriptor 
resources, i.e., resources that represent data objects.

>
> Thus, whatever happens you need to have code which strips the fragment, and you need to process responses which describe more than just the subject you’re interested in. Given that, there’s no clear benefit to the additional complexity required by not performing the simplest of simple string manipulation before making the request.

Flexibility is always a feature. Yes, it can be complex to achieve.
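For what it's worth, the client-side stripping you describe really is the simplest of string manipulations; a sketch:

```python
from urllib.parse import urldefrag

def prepare_request(uri: str):
    """Strip the fragment before dereferencing ('#' is not legal in a
    request-URI), but keep it for filtering the response graph locally."""
    target, fragment = urldefrag(uri)
    return target, fragment
```

The client dereferences `target` and then uses `fragment` to pick out the triples about the subject it actually cares about.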
>
> I’m not sure why there’s any confusion on this point. Even if the specs aren’t clear, the reality of the processing hoop-jumping should be enough to dissuade anybody from thinking it might be a good idea.
>
>
>> 1. sparql describe url
>> 2. sparql construct url.
>>
>> If it can't do the above, then, yes it can 404 or even 406.
> or, indeed, 400. It _is_ a bad request, in every technical sense. '#' isn't a valid character to appear in the Request-URI.

That too; at least the user agent receives some idea of what's wrong.
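For clarity, the two sparql URL forms from my earlier mail might look like this. The endpoint is hypothetical; the `query` parameter is how the SPARQL Protocol carries a query over GET.

```python
from urllib.parse import urlencode

ENDPOINT = "https://example.org/sparql"  # hypothetical endpoint

def sparql_describe_url(subject: str) -> str:
    """1. sparql describe url: ask the store to describe the named subject."""
    return ENDPOINT + "?" + urlencode({"query": f"DESCRIBE <{subject}>"})

def sparql_construct_url(subject: str) -> str:
    """2. sparql construct url: build an explicit graph about the subject."""
    q = f"CONSTRUCT {{ <{subject}> ?p ?o }} WHERE {{ <{subject}> ?p ?o }}"
    return ENDPOINT + "?" + urlencode({"query": q})
```

DESCRIBE leaves the shape of the result to the store; CONSTRUCT spells it out, which is why the describe form deftly stands in for the "missing DESCRIBE verb" in HTTP.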

>
>> Note: a Linked Data [Resource] Server is responsible for serving up Object/Entity descriptor resources to user agents. In a sense, they act on the missing DESCRIBE verb re. HTTP, which you get (deftly) via a sparql describe URL.
> This distinction between a “Linked Data [Resource] Server” and an “Information [Resource] Server” is rather arbitrary, and runs counter to the principles of the WWW as far as I can see.

It doesn't, since the WWW has many interaction dimensions. The world is 
most familiar with the Information Space dimension; Linked Data is all 
about the Data Space dimension. Of course, certain best practices 
enable smooth (non-disruptive) oscillation between the Information and 
Data Space dimensions.
>
> M.
>


-- 

Regards,

Kingsley Idehen	
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca handle: @kidehen
Google+ Profile: https://plus.google.com/112399767740508618350/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen

Received on Friday, 6 January 2012 14:19:09 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Friday, 6 January 2012 14:19:09 GMT