Re: Summary various 'about:' solutions (rough first cut)

This continues to be a useful exercise...

On Mar 08, 2004, at 16:14, ext Dirk-Willem van Gulik wrote:

> On 08/03/2004, at 2:54 PM, Patrick Stickler wrote:
>>> ->	MGET
>>> 	Treat the URL as a URL - but add an MGET method which, unlike
>>> 	the GET method, returns information about what the GET method
>>> 	would have returned. [1]
>>> 	E.g. the URL is no longer just the location of the resource - but
>>> 	also the location of the data about the location.
>> Er. Not quite. The URI denotes a resource. One can use GET to request
>> a representation of that resource. Or one can use MGET to request a
>> concise bounded description of that resource. There is no need to 
>> posit
>> any concept of "location"; either of the resource or its description.
> Thanks - what I meant was more that in combination with some of the
> other methods
> ->	MGET assumes that there is a place (i.e. location) you can 'get'
> 	things from.

Actually, it doesn't.

A particular implementation may utilize a knowledge portal as the
centrally managed source of descriptions for particular resources,
and MGET requests may simply be redirected to that portal -- but
it's just as acceptable for a server to hide that implementation
detail and respond with the description directly.

URIQA forces no explicit definition of a 'location', nor any
distinction between a resource and the place where it is described.

True, those descriptions have to reside *somewhere*, but that is
completely within the scope of the implementation and URIQA says
nothing whatsoever about how or where descriptions are managed.
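To make that concrete, here's a minimal sketch of the 'knowledge portal'
arrangement mentioned above -- one *possible* implementation, not anything
mandated by URIQA. The portal URL, handler names, and example URIs are
all illustrative assumptions of mine:

```python
# Sketch only: one way an origin server could satisfy MGET by
# redirecting to a centrally managed "knowledge portal".
# PORTAL and all URLs below are hypothetical, not from the URIQA spec.
from http.server import BaseHTTPRequestHandler, HTTPServer

PORTAL = "http://portal.example.org/describe"  # hypothetical portal

class URIQARedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET still returns an ordinary representation of the resource.
        body = b"<html><body>a representation of the resource</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_MGET(self):
        # MGET is delegated to the portal; equally, the server could
        # hide this detail and return the description itself.
        self.send_response(302)
        self.send_header("Location",
                         PORTAL + "?uri=http://example.org" + self.path)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet
```

The point is just that "where descriptions live" remains an
implementation decision -- if the server answered MGET directly,
the client would never see the difference.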

>>> MGET:	extra code in the origin (web) server to understand MGET
>>> 		extra text in the document describing HTTP
>> ??? Do you mean modification of the HTTP spec? I don't see that
>> as a requirement. The specification of WebDAV is distinct from HTTP.
>> I see no reason why the specification of URIQA would not be distinct
>> from HTTP.
> Will make this clear - what I meant was: some sort of document
> describing what 'MGET' is.

Er. Yes. Of course. But documentation is necessary for whatever approach
is taken, so I don't understand how this serves as a useful point of
comparison between the approaches -- unless you're talking about the
*amount* of documentation required to describe the approach.

(the URIQA specification is less than 5 printed pages ;-)

>> I don't see how URIQA imposes any "extra" code in the agent (and, 
>> believe
> Well - with the Agent I mean the thing extracting the information; i.e.
> it needs to do a
> 	write( socket, "MGET ...."
> at some point - in addition to its 'GET' and 'HEAD' or whatever else 
> it does ?

A URIQA agent would not use GET or HEAD or the like. It would simply
use MGET (or another URIQA method), executing the HTTP request the
same way as any other and processing the results.
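By way of illustration, a sketch with a stock HTTP client library: the
only URIQA-specific bit is the method string itself (the helper name
and the Accept value are my assumptions, not from the spec):

```python
# Sketch: issuing MGET with an unmodified stdlib HTTP client -- no
# special client code beyond naming the method. fetch_description is
# a hypothetical helper name.
import http.client

def fetch_description(host, path, port=80):
    """Request a concise bounded description of the resource at path."""
    conn = http.client.HTTPConnection(host, port)
    conn.request("MGET", path, headers={"Accept": "application/rdf+xml"})
    resp = conn.getresponse()
    body = resp.read()
    conn.close()
    return resp.status, body
```

Nothing in the request pipeline changes relative to a GET; the client
library neither knows nor cares that MGET is not a core HTTP method.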

>> If anything, URIQA is the lightest, most efficient approach insofar
>> as client implementation is concerned.
> Actually I can see the people who add a Header (which are parsed
> already) or a <!-- field --> in the HTML argue the same :-)

Let me qualify.

URIQA is the lightest, most efficient approach that *works*
in a scalable, safe, and robust manner, insofar as client implementation
is concerned.


>>> MGET:	Pro:	you can get at the metadata right away
>>> 			with an MGET
>>> 		-	Little code in the client/agent needed.
>> No special code whatsoever in the client/agent needed.
> Respectfully I disagree - anything which needs to get at the
> descriptions will need to do 'MGET' and have some code (no
> matter how simple) to get it.

Again, any vanilla non-URIQA HTTP client that can accept RDF/XML
could more effectively use URIQA with no change to the client
architecture -- simply by specifying URIQA-specific HTTP
request parameters.

URIQA imposes no change whatsoever on HTTP client architecture.
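For the header-based flavour of that, a sketch -- with the caveat that
the header name "URIQA-Request" is purely illustrative, not something
I'm claiming the spec defines -- showing a vanilla GET pipeline left
untouched apart from one extra request header:

```python
# Sketch of the header-parameter variant: a plain GET carrying one
# URIQA-style header. "URIQA-Request" is an illustrative name only.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The header selects the description; otherwise an ordinary
        # representation is returned.
        if self.headers.get("URIQA-Request") == "description":
            body, ctype = b"<rdf:RDF/>", "application/rdf+xml"
        else:
            body, ctype = b"<html/>", "text/html"
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/res", headers={"URIQA-Request": "description"})
description = conn.getresponse().read()   # RDF, not the HTML page
server.shutdown()
```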

Patrick Stickler
Nokia, Finland

Received on Tuesday, 9 March 2004 04:54:49 UTC