
Re: HTTP Methods

From: Patrick Stickler <patrick.stickler@nokia.com>
Date: Tue, 9 Mar 2004 11:37:20 +0200
Message-Id: <5C6C9056-71AD-11D8-B899-000A95EAFCEA@nokia.com>
Cc: www-rdf-interest@w3.org, "ext Seaborne, Andy" <andy.seaborne@hp.com>
To: "ext Dirk-Willem van Gulik" <dirkx@asemantics.com>


On Mar 08, 2004, at 16:02, ext Dirk-Willem van Gulik wrote:

>
> On 08/03/2004, at 2:08 PM, Patrick Stickler wrote:
>
>> On Feb 26, 2004, at 15:26, ext Seaborne, Andy wrote:
>
>>> The obvious suggestion: do a HEAD (or OPTIONS) operation and have a
>>> response header field.  But that is an extra round trip.  (Aside: The
>>> round trip is
>>
>> I considered this approach at length in Cannes and ended up rejecting
>> it for two primary reasons: (1) it pushes implementational complexity
>> onto the client, which is inefficient both in terms of processing and
>> implementational effort, given the usual ratio of clients to servers,
>
> Though I agree with the sentiment of the above - it should be noted
> that all methods proposed so far require changes to (or at least
> awareness of) the clients in order to make use of the information.
>
> ...
>
> But each requires an augmenting of the client or agent one way or  
> another.

I'm sorry, but I disagree.

Requiring an agent to include a particular header and specify a particular
method is exactly what HTTP clients do. How does specifying a value for
URIQA-uri: and specifying the MGET method require "augmenting" the client
in any way? It's perfectly normal for an HTTP client to, e.g.,
specify a value for Accept: and specify the POST method.

The fact that the client will also be able to consume RDF is nothing new
for existing clients that submit GET requests and expect RDF/XML
representations in response.

URIQA imposes *no* modifications on existing HTTP clients. All
enhancements are restricted to the server side -- and can also be
accommodated by a specialized proxy rather than modifying each and every
server itself.

For clients, it's business as usual.
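To make the point concrete, here is a minimal sketch of what such a request
looks like on the wire. It only assumes what the discussion above states:
an MGET method and a URIQA-uri: header, used exactly the way any client
already uses POST and Accept:. The host, path, and URI are illustrative.

```python
def build_mget_request(host, path, uri):
    # Any HTTP client can emit a custom method and an extra header;
    # nothing here requires "augmenting" the client beyond what a
    # POST request with an Accept: header already does.
    # "MGET" and "URIQA-uri" are the names used in the URIQA discussion.
    return (
        f"MGET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Accept: application/rdf+xml\r\n"
        f"URIQA-uri: {uri}\r\n"
        f"\r\n"
    )

req = build_mget_request("example.org", "/doc", "http://example.org/doc")
print(req.splitlines()[0])  # → MGET /doc HTTP/1.1
```

Structurally this is an ordinary HTTP/1.1 request; only the method token
and one header name differ from a plain GET.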

>
> They do differ in some key aspects though (more detail at
> http://lists.w3.org/Archives/Public/www-rdf-interest/2004Mar/0019.html)
>
> ->	Is the metadata co-located with the data itself ?

URIQA is agnostic. Implementations are free to decide.

>
> ->	Does the manager of the data need to get involved in this,
> 	OR does the software need to be changed, or can it be done
> 	external ?

URIQA is agnostic. There are ways to keep the management of
representations disjunct from the management of descriptions
(though one would typically expect involvement of the
resource owners in some facet of the solution).

>
> ->	how rich is it, how much URL space pollution is there?

URIQA is agnostic. It doesn't mandate the minting of distinct
URIs to denote descriptions, but it fully facilitates doing
so, and providing for descriptions of descriptions.

>
> ->	does it work for just http (or just http+html) or for
> 	anything with a URI, or just those with a URL.

URIQA is HTTP specific and works with any URI that is meaningful
(dereferenceable) to HTTP.

However, DDDS can be employed for non-HTTP meaningful URIs to
obtain an alias URI which is meaningful to HTTP, via which
URIQA can provide descriptions.

Thus, just as DDDS can provide for resolving a URN to a
representation via an alias (e.g. http:) URI; likewise,
DDDS can provide for resolving a URN to a description via
URIQA.
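The alias step described above can be sketched as follows. This is a toy
stand-in for DDDS resolution: a real deployment walks NAPTR records in
DNS, whereas the mapping table and URNs here are purely illustrative. The
point is only the shape of the indirection: a non-HTTP URI resolves to an
http: alias, and the alias is then dereferenced (or MGETed) as usual.

```python
# Hypothetical urn: -> http: alias table standing in for DDDS resolution.
ddds_table = {
    "urn:example:doc-1": "http://example.org/doc-1",
}

def resolve_to_http_alias(uri):
    # URIs already meaningful to HTTP pass through unchanged; anything
    # else is resolved to an http: alias, DDDS-style.
    if uri.startswith("http://"):
        return uri
    return ddds_table[uri]

print(resolve_to_http_alias("urn:example:doc-1"))
# → http://example.org/doc-1
```

Once the alias is in hand, the URN's description is obtained by applying
URIQA to the alias URI, exactly as for a native http: URI.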

Since resolution via DDDS in any case depends on there being
some other URI which is meaningful to some protocol via which
representations can be accessed, one can't effectively have
e.g. DDDS without HTTP.

If folks think URNs are necessary (and I personally don't think
they are) then fine, but ultimately, if you want data (representations
or descriptions) then HTTP is the best game in town.

>
> etc - IMHO each very different in terms of trade-offs. Now where does
> one want the final trade-off to be?

Looks like URIQA has no real trade-offs, other than support for the
new methods.

>
> ->	simple for the client ?

URIQA is. No special extensions or machinery are required by URIQA that
are not already present in any HTTP client able to accept RDF/XML.

> ->	simple/fitting the operational/organisational regimes of the
> 	creators/managers of the data ?

URIQA is. And this is proven in a global, real-world deployment within Nokia.

> ->	as light weight as possible ?

URIQA is. Demonstrated by the open source implementation.

> ->	very generic, or very specific
> ->	just http+html, or anything http ?
> 	.. etc.

URIQA is as generic as is practical, given the need for a globally
deployed, ubiquitous infrastructure for interacting with authoritative
servers.

>
> A long list can be made - and the RFCs suggest various architectural
> and technical constraints with regard to this.

???

>
>> and (2) it would be more efficient to simply return the URI of a portal
>> via which the authoritative description could be obtained, with the
>> protocol(s) required to interact with that portal defined by a standard
>> such as is expected to arise from the DAWG work.
>
> Assuming that the URI you are acting on is in fact a URL

This distinction is not meaningful from a RESTful perspective.

A URL is simply a URI that can be dereferenced to access one or
more representations.

Since the ability to access representations can change over time,
a given URI may act as a URL at some times and not at others.

"URL" reflects a perception or expectation regarding the utility of
a URI to access representations. And with DDDS, *ANY* URI can be
treated as a URL, if the redirection/resolution process is opaque
to the client.

> and that that
> URL can be contacted to get the URI of the authoritative description.
>

I was specifically addressing the efficiency argument: since you do an
OPTIONS request to obtain the rule/pattern by which the resource URI is
transformed into the description URI, caching improves efficiency. Even
if you execute that OPTIONS request against the web authority URI many
times for many terms grounded in that web authority, you don't take the
efficiency hit that you do if you call HEAD on each term.
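The caching argument can be sketched in a few lines. All names here are
illustrative (the rewrite rule, the ";about" pattern, and the fetch
function are hypothetical stand-ins for an OPTIONS round trip); the point
is only that the OPTIONS result is cached per web authority, so N terms
under one authority cost one round trip rather than N HEAD requests.

```python
from urllib.parse import urlsplit

options_cache = {}   # authority -> description-URI rewrite rule
round_trips = 0

def fetch_rewrite_rule(authority):
    # Stand-in for an OPTIONS request that returns the rule/pattern by
    # which a resource URI is transformed into its description URI.
    global round_trips
    round_trips += 1
    return lambda uri: uri + ";about"   # hypothetical pattern

def description_uri(resource_uri):
    authority = urlsplit(resource_uri).netloc
    if authority not in options_cache:      # one OPTIONS per authority
        options_cache[authority] = fetch_rewrite_rule(authority)
    return options_cache[authority](resource_uri)

for term in ("http://example.org/a", "http://example.org/b",
             "http://example.org/c"):
    description_uri(term)

print(round_trips)  # → 1 (vs. 3 with a HEAD per term)
```

With a HEAD-per-term scheme the round-trip count grows with the number of
terms; with the cached OPTIONS rule it grows only with the number of
distinct web authorities.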

Patrick

--

Patrick Stickler
Nokia, Finland
patrick.stickler@nokia.com
Received on Tuesday, 9 March 2004 04:39:37 UTC
