RE: MGET and machine processing

-------- Original Message --------
> From: Patrick Stickler <mailto:patrick.stickler@nokia.com>
> Date: 26 November 2003 09:59
> 
<snip/>

> My #1 requirement is that, for any arbitrary URI which is meaningful
> to the HTTP protocol, and contains a web authority component, a SW agent
> should be able to send a request to that web authority to obtain a
> concise bounded description of the resource denoted by that URI, with
> a single request, and without any further information than the URI
> itself
> (and the generic protocol by which the request is made) and recieve in
> response either a description, or an error indication of some fashion.

<snip/>



Patrick - could you spell out a concrete use case?  I understand the
mechanisms you are proposing; it is the underlying need I am less clear
about, and whether this is the only solution.

- - - -

A web authority has a number of responsibilities.  Often there is separation
between the responsibility for naming and the responsibility for running the
server and its software.  Suppose a web authority delegates the naming
authority for part of its namespace; this does not delegate all aspects of
an HTTP URI - in particular, authority for naming is not authority over what
software to run.

We need approaches that enable a wide range of cases for people to put up
descriptions of resources.  These could be (may have to be) based on
convention rather than architectural design, so that they work even where
naming authority extends only to putting up static pages (that is,
ISP-hosted pages).  There is no single answer here.
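To make the static-pages case concrete: one purely hypothetical convention
would be to publish the description of a resource as an ordinary file next
to it, say with an ".rdf" suffix.  The suffix and layout here are my own
illustration, not anything agreed; the point is only that a client can
derive the description URI from the resource URI with no server support
beyond static file hosting.

```python
from urllib.parse import urlsplit, urlunsplit

def description_uri(resource_uri, suffix=".rdf"):
    """Derive a description URI by a hypothetical convention: the
    description of <path> is served as a static file at <path> + suffix.
    A naming authority with only static-page access (e.g. ISP hosting)
    can publish it by uploading one extra file next to the resource."""
    scheme, netloc, path, query, frag = urlsplit(resource_uri)
    # Drop query and fragment: the description is a plain sibling document.
    return urlunsplit((scheme, netloc, path + suffix, "", ""))
```

A client following this convention would simply GET the derived URI with an
ordinary HTTP library - no new verb, no server changes.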

The requirement to modify HTTP, web servers and client network libraries in
order to add a new verb restricts who can easily get descriptions onto the
semantic web.  The barrier to entry is high, and it effectively creates two
webs - there should not be "web servers" and "semantic-web-enabled web
servers".  We have to live with the starting point even if it is not the one
we might have chosen, given a completely clean start.
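For illustration, this is roughly what the proposed MGET verb would put on
the wire (the Accept header is my assumption of a plausible choice, not part
of the proposal as quoted).  The request line is the whole difference from a
plain GET - and it is exactly the part an unmodified server will reject with
405 Method Not Allowed or 501 Not Implemented, which is the
barrier-to-entry concern above.

```python
def mget_request(host, path):
    """Build the raw HTTP/1.1 request an MGET-capable client would send.
    Only the method name differs from an ordinary GET, but every server
    and client library in the deployment chain must be taught about it."""
    return (f"MGET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Accept: application/rdf+xml\r\n"  # assumed content negotiation
            "\r\n")
```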

We also need third-party descriptions - these could be as important as
authoritative descriptions.  Associating a knowledge base with a URI can be
done in a number of ways (not all of them very nice): by convention, by
asking the server, and so on.  You argue against double-tripping - doing
some operation before every GET.  In practice, it's not every GET: the
discovery operation would yield a range of URIs for the description
authority, and the result could be cached, making it a one-time overhead.

We are starting from the existing web.  We need to find approaches that
build on the web.

	Andy

Received on Thursday, 27 November 2003 09:14:14 UTC