RE: Valid representations, canonical representations, and what the SW needs from the Web...

From: <Patrick.Stickler@nokia.com>
Date: Tue, 4 Feb 2003 15:19:26 +0200
Message-ID: <A03E60B17132A84F9B4BB5EEDE57957B5FBB07@trebe006.europe.nokia.com>
To: <dehora@eircom.net>
Cc: <paul@prescod.net>, <jbone@deepfile.com>, <sandro@w3.org>, <www-tag@w3.org>



> -----Original Message-----
> From: ext Bill de hÓra [mailto:dehora@eircom.net]
> Sent: 04 February, 2003 15:02
> To: Stickler Patrick (NMP/Tampere)
> Cc: paul@prescod.net; jbone@deepfile.com; sandro@w3.org; 
> www-tag@w3.org
> Subject: Re: Valid representations, canonical 
> representations, and what
> the SW needs from the Web...
> 
> 
> Patrick.Stickler@nokia.com wrote:
> 
> > 
> > Or far better, 
> > 
> > GET  http://www.prescod.net gives you a representation of a resource
> > MGET http://www.prescod.net gives you knowledge of a resource
> 
> I'm not sure that this is far better - making a separation between 
> 'resource knowledge' and 'resource snapshot' seems somewhat arbitrary.

It's anything but arbitrary. It reflects the primary focus of the
Web versus the SW.

The point of intersection between the two is that both deal with
resources denoted by URIs.

The Web focuses on representations of those resources.
The SW focuses on knowledge about those resources.

The separation between representation and knowledge is every bit
as non-arbitrary as the distinction between the Web and SW.

HTTP provides a proven, global, scalable distributed solution for
access of representations of resources based on the URIs denoting
those resources. That same success can be carried over to the SW
by using the same architecture and general infrastructure for
access of knowledge about resources based on the same URIs
denoting those resources.
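To make the parallel concrete, here is a minimal sketch of what the two requests might look like on the wire. MGET is the proposal under discussion, not a registered HTTP method, and the Accept header value is an illustrative assumption:

```python
# Hypothetical MGET request, assembled exactly like a plain GET.
# The verb is the only difference; the URI, headers, and transport
# are unchanged HTTP. (MGET is a proposal, not a standard method.)

def build_request(method, host, path="/"):
    """Assemble a minimal HTTP/1.1 request as raw text."""
    return (
        f"{method} {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Accept: application/rdf+xml\r\n"  # ask for knowledge as RDF
        f"\r\n"
    )

# GET asks for a representation of the resource;
# MGET would ask for knowledge about that same resource,
# using the same URI to denote it.
get_req = build_request("GET", "www.prescod.net")
mget_req = build_request("MGET", "www.prescod.net")
print(mget_req.splitlines()[0])
```

The point of the sketch is that nothing in the request besides the verb changes, which is why the same server architecture and infrastructure can carry both.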

Trying to make the Web work for the SW doesn't work, because the
Web doesn't care about knowledge. It just cares about representations,
and trying to treat knowledge as representations leads to ambiguity
which undermines the whole point of the SW.

> > To date, the primary suggestions have centered around treating
> > knowledge about resources as representations of those resources,
> > which I consider to be the crux of the problem.
> > 
> > Once you keep knowledge and representations disjunct, and realize
> > that the Web cares about representations and the SW about knowledge,
> > and both needs can be provided by the same essential architecture
> > as it (almost) stands (HTTP) but requiring adjustments to 
> maintain that
> > crucial distinction between representation and knowledge (GET vs.
> > MGET, etc.) then all is well, and both the Web and SW can agree
> > about resources and URIs and that URIs denote resources, etc.
> > and the Web can concern itself with representations without 
> troubling
> > about knowledge and the SW can concern itself with knowledge
> > without troubling about representations, and URIs tie the two
> > together quite nicely, consistently, and without conflict or
> > ambiguity.
> > 
> > Problem solved.
> 
> Still not convinced there is an architecture/protocol problem (there 
> may be an engineering/programming problem). As well as all that, 
> there is as I said before, the whole matter of seeing an MGET method 
> deployed - difficult, imo. Creating new verbs is not cheap, which is 
> why there are so few of them in HTTP. 

Well, no one said creating the SW would be easy, or even as easy as
creating the Web. 

> I would rather see HEAD 
> enriched to provide resource directed information, 

I fail to see how that is any less difficult than implementing and
promoting a few new verbs, and I consider it a far less clean
solution: it doesn't address how to PUT or DELETE knowledge about
a resource, and it confuses the semantics of HEAD, which (IMO) is
clearly defined as returning metadata about the representation, etc.
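For concreteness, every header in a typical HEAD response describes the representation rather than the resource, which is what overloading HEAD would muddy. The header values below are illustrative:

```python
# A typical HEAD response: each header describes properties of the
# *representation* (its media type, its length, when it was modified),
# not knowledge about the resource the URI denotes. Values are
# illustrative, not taken from any real server.
head_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"    # media type of the representation
    "Content-Length: 4096\r\n"       # size of the representation
    "Last-Modified: Tue, 04 Feb 2003 13:00:00 GMT\r\n"
    "\r\n"                           # HEAD responses carry no body
)
print(head_response.splitlines()[0])
```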

Adding SW specific verbs allows existing HTTP servers to function
in a fully compatible manner with SW-capable HTTP servers, and
those who care about knowledge and the SW will choose to update
their servers just as those who care about Web-based content
management might choose to update or extend their servers to 
support WebDAV extensions to HTTP.

In fact, the WebDAV approach seems to me to be a promising way
to go. The core HTTP specs remain the same, and additional
methods and responses are defined specific to metadata and
implemented as modular extensions to existing HTTP servers.
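One rough sketch of that modular-extension idea, using Python's standard HTTP server (which dispatches requests to a `do_<METHOD>` handler by name). The handler classes, method names, and RDF payload are my own illustrative assumptions, not any agreed design:

```python
# Sketch of the extension-module idea: a server keeps its normal
# GET handling and gains SW verbs only if the extension is installed,
# WebDAV-style. BaseHTTPRequestHandler dispatches to do_<METHOD>,
# so a plain server simply answers MGET with 501 Unsupported method
# and remains fully compatible.

from http.server import BaseHTTPRequestHandler

class PlainHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The Web side: serve a representation of the resource.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html>a representation</html>")

class SWHandler(PlainHandler):
    # The SW side, layered on as a modular extension.
    def do_MGET(self):
        # Serve knowledge about the resource (payload is illustrative).
        self.send_response(200)
        self.send_header("Content-Type", "application/rdf+xml")
        self.end_headers()
        self.wfile.write(b"<rdf:RDF/>")

# A plain server stays untouched: it simply lacks do_MGET.
print(hasattr(PlainHandler, "do_MGET"), hasattr(SWHandler, "do_MGET"))
```

The design point is that unextended servers need no changes at all, which mirrors how WebDAV methods like PROPFIND were deployed alongside core HTTP.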

> assuming the HTTP 
> experts here don't see it as abuse.

I would expect most would. I do.

Regards,

Patrick

--
Patrick Stickler, Nokia/Finland, (+358 40) 801 9690, patrick.stickler@nokia.com
 
Received on Tuesday, 4 February 2003 08:19:35 GMT
