Re: MGET and machine processing

On Tuesday, Nov 25, 2003, at 20:27 Europe/Helsinki, ext Jan Algermissen
wrote:

> Patrick Stickler wrote:
>> On Monday, Nov 24, 2003, at 16:55 Europe/Helsinki, ext Mark Baker 
>> wrote:
>>> On Mon, Nov 24, 2003 at 02:41:34PM +0200, Patrick Stickler wrote:
>>>> Well, while I consider it acceptable to treat a description as
>>>> a representation, it is nonetheless necessary to be clear about
>>>> the distinction when interacting with the server.
>>> Right.  Using a different URI would be another way to do that! 8-)
> Patrick--
> why not let the Web itself 'decide' where one can find a description of
> a resource? Today, if I need to do a site-bound search on a particular
> site I can use for example Google - the evolution of the Web itself
> has led to this well-known service. Nobody asked for some HTTP
> extension to do a site-wide query, it just happened because it made
> (economic) sense to provide this service.

And I expect there to be knowledge repositories similar to Google
(or even, Google) which provide a means of querying knowledge harvested
from millions of sources around the globe.

But Google would not exist if it could not easily crawl servers
using generic, standardized methods.

And Google-like SW services will not exist if they cannot easily
crawl servers using generic, standardized methods.

*Not* all RDF/XML available via GET is asserted by a given
authority. One very significant function that a distinct SW
method such as MGET provides is that it bears the explicit
notion of asserted knowledge about a given resource.
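To make the distinction concrete, here is a sketch of what an MGET
exchange might look like (the host, path, and response body below are
hypothetical, for illustration only): where GET returns a representation
of the resource, MGET asks the server for its authoritative description
of that resource.

```
MGET /id/someResource HTTP/1.1
Host: sw.example.org

HTTP/1.1 200 OK
Content-Type: application/rdf+xml

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://sw.example.org/id/someResource">
    <dc:title>Some Resource</dc:title>
  </rdf:Description>
</rdf:RDF>
```

The point is that the method itself, not a second URI or out-of-band
convention, signals that the returned statements are asserted by the
authority for that resource.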

Furthermore, while large consolidations of knowledge will
surely prove useful, they lose the distinction between which
bits of knowledge are authoritative and which are hearsay
from third parties (unless a sufficient infrastructure for tracking
the sources of statements is added -- but even then, statements
harvested from some RDF document on some web server are not
necessarily authoritative and intended to be taken as asserted
by the owner).

There are *many* issues and requirements that the SW has that
the Web does not, so it should not come as any surprise that
the Web architecture, albeit an outstanding starting point, is
not sufficient to realize the fullness of the SW vision.

> As soon as the Web itself starts to demand a similar service for
> retrieving descriptions of resources, I think it won't be long until
> existing search engines and new competitors start providing it.
> You would then simply submit your descriptions to these services as
> you today submit your URLs for indexing.

> Imagine all the nice (semantic) processing that could be done on the
> millions of resource descriptions....

I agree. But see above.



> Anyway, just a thought.
> Jan
> -- 
> Jan Algermissen                 
> Consultant & Programmer	        

Received on Tuesday, 25 November 2003 13:53:24 UTC