W3C home > Mailing lists > Public > www-rdf-interest@w3.org > March 2004

Re: HTTP Methods

From: Dirk-Willem van Gulik <dirkx@asemantics.com>
Date: Mon, 8 Mar 2004 15:02:01 +0100
Message-Id: <2BE056F6-7109-11D8-9D02-000A27B4B4E0@asemantics.com>
Cc: www-rdf-interest@w3.org, "ext Seaborne, Andy" <andy.seaborne@hp.com>
To: Patrick Stickler <patrick.stickler@nokia.com>

On 08/03/2004, at 2:08 PM, Patrick Stickler wrote:

> On Feb 26, 2004, at 15:26, ext Seaborne, Andy wrote:

>> The obvious suggestion: do a HEAD (or OPTIONS) operation and have a
>> response header field.  But that is an extra round trip.  (Aside: The
>> round trip is
>
> I considered this approach at length in Cannes and ended up rejecting
> it for two primary reasons: (1) it pushes implementational complexity
> onto the client, which is inefficient both in terms of processing and
> implementational effort, given the usual ratio of clients to servers,

Though I agree with the sentiment of the above - it should be noted 
that all methods proposed so far require changes to (or at least 
awareness by) the clients in order to make use of the information.

So far we've seen methods which:

1>	Need to know a few extra protocol methods (e.g. an MGET, META or
	INFO, etc.) added to, say, HTTP

2>	They need to be able to detect, parse and know how to use a

	-> certain extra header.

	-> <!-- comment --> information in the payload

	-> some other payload construct like XMP.

	-> and in some cases need to be able to recognize and
	then perform more complex parsing (GRDDL).

	(Assuming we're strictly limiting our world to http)

3>	They need to know an entire new SOAPish protocol
	-> LSID.

4>	They need to know the DDDS algorithm.

So client side - some of the above require protocol changes (and 
understanding of the interaction with CDNs, proxies and whatnot), 
some are very specific to HTTP, some are easy, some are hard.

But each requires augmenting the client or agent one way or 
another.
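To make option 2 above concrete, here is a minimal sketch of what such a client-side augmentation looks like: the client must detect a particular extra response header before it can locate any metadata. The header name "X-Meta-Location" and the URIs are purely hypothetical - none of the proposals in this thread standardise them.

```python
# Sketch of option 2: a client that has to detect, parse and act on
# an extra response header. The header name "X-Meta-Location" is a
# hypothetical placeholder, not part of any proposal in this thread.

def find_metadata_pointer(headers):
    """Return the metadata URI advertised in the response headers,
    or None if the server sent no such hint."""
    # Header field names are case-insensitive in HTTP, so compare
    # them lowercased.
    for name, value in headers.items():
        if name.lower() == "x-meta-location":
            return value.strip()
    return None

# A client would normally obtain these headers from a HEAD or GET
# response; they are hard-coded here for illustration.
response_headers = {
    "Content-Type": "application/rdf+xml",
    "X-Meta-Location": "http://example.org/meta/resource-42",
}

print(find_metadata_pointer(response_headers))
```

Even this trivial case illustrates the point above: without the extra lookup logic, an unmodified client simply never sees the metadata.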

They do differ in some key aspects though (more detail at 
http://lists.w3.org/Archives/Public/www-rdf-interest/2004Mar/0019.html)

->	Is the metadata co-located with the data itself ?

->	Does the manager of the data need to get involved in this,
	OR does the software need to be changed, or can it be done
	externally ?

->	How rich is it, and how much URL space pollution is there ?

->	Does it work for just http (or just http+html), for
	anything with a URI, or just those with a URL ?

etc. - IMHO each is very different in terms of trade-offs. Now where
does one want the final trade-off to be:

->	simple for the client ?
->	simple/fitting the operational/organisational regimes of the
	creators/managers of the data ?
->	as lightweight as possible ?
->	very generic, or very specific ?
->	just http+html, or anything http ?
	.. etc.

A long list can be made - and the RFCs suggest various architectural
and technical constraints with regard to this.

> and (2) it would be more efficient to simply return the URI of a portal
> via which the authoritative description could be obtained, with the
> protocol(s) required to interact with that portal defined by a standard
> such as is expected to arise from the DAWG work.

Assuming that the URI you are acting on is in fact a URL and that that
URL can be contacted to get the URI of the authoritative description.
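The two-step interaction quoted above can be sketched as follows: the first response carries only a pointer, and the client makes a second request to the portal to obtain the authoritative description. Here `fetch()` is a toy stand-in for a real HTTP GET, and the "X-Description-Portal" header name and the example URIs are hypothetical.

```python
# Sketch of the "return a portal URI" idea: the first response
# points at a portal, and the description itself comes from a
# second request. fetch() is a toy stand-in for an HTTP GET over
# an in-memory table; header name and URIs are hypothetical.

def fetch(uri, store):
    """Toy dereference: look the URI up in an in-memory table."""
    return store[uri]

def get_description(resource_uri, store):
    first = fetch(resource_uri, store)
    portal = first["headers"].get("X-Description-Portal")
    if portal is None:
        return None  # server offers no portal; the client is stuck
    # Second round trip: ask the portal for the description.
    return fetch(portal, store)["body"]

store = {
    "http://example.org/thing": {
        "headers": {"X-Description-Portal": "http://example.org/desc/thing"},
        "body": "<html>...</html>",
    },
    "http://example.org/desc/thing": {
        "headers": {},
        "body": "<rdf:RDF>...</rdf:RDF>",
    },
}

print(get_description("http://example.org/thing", store))
```

Note that this only works when the initial URI is dereferenceable in the first place - which is exactly the assumption flagged above.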

Dw
Received on Monday, 8 March 2004 09:02:26 UTC
