
Humans or agents (was RE: A plea for peace. was [...])

From: Peter Crowther <Peter.Crowther@melandra.com>
Date: Thu, 12 Apr 2001 19:05:22 +0100
Message-ID: <B6F03FDBA149CA41B6E9EB8A329EB12D05A12B@vault.melandra.net>
To: "'Aaron Swartz'" <aswartz@swartzfam.com>
Cc: RDF Logic <www-rdf-logic@w3.org>
> From: Aaron Swartz [mailto:aswartz@swartzfam.com]
> pat hayes <phayes@ai.uwf.edu> wrote:
> > Yes, of course. It works for the millions of HUMAN users of the Web.
> > This is not surprising, since names (not URI's, just plain names)
> > have worked for the hundreds of millions of human beings who have
> > been using language since before the Neolithic. (The Web hasn't added
> > anything to the human use of language; it has just enabled us all to
> > listen to more of it.) But this entire discussion on RDF is about how
> > to arrange things so that SOFTWARE AGENTS can use information on the
> > web, not human beings.
> Of course! But, to my knowledge, these SOFTWARE AGENTS are all
> programmed and run by HUMAN BEINGS (or at least, there is always a
> human at the top of the chain of command).

Yes, at the top of the chain of command; but no, that human may have no
involvement in the detail.  Check out some of the work at Southampton
University (http://www.ecs.soton.ac.uk) on agents that are capable of
negotiating autonomously.  See also the Scientific American article by Tim
Berners-Lee, James Hendler and Ora Lassila at
http://www.scientificamerican.com/2001/0501issue/0501berners-lee.html (kudos
to all three authors; we're asking the magazine for reprints :-).

More generally, the whole idea of the current crop of software agents is
that you can delegate to them jobs that a human would previously have had to
carry out.  This is difficult if they continually have to come back to the
original human for clarification, and pretty near impossible if they have to
ask other humans around the Web [as opposed to other agents] for
clarification.  If they are going to be flexible systems that can deal with
unfamiliar structures, they will need to be able to obtain and use
appropriate models of those structures --- such as the model-theoretic
semantics proposed by the DL proponents.
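To make the idea concrete, here is a minimal sketch (all names and the
hierarchy are hypothetical, not from the original discussion) of the kind of
model an agent might consult: a toy subsumption hierarchy in the spirit of
description logics, which lets the agent decide whether an unfamiliar term
is a kind of something it already understands, without asking a human.

```python
# Toy subsumption model: maps a concept to its direct superconcept.
# (Illustrative names only; a real DL reasoner handles far richer axioms.)
SUBCLASS_OF = {
    "Hatchback": "Car",
    "Car": "Vehicle",
    "Truck": "Vehicle",
}

def subsumes(general: str, specific: str) -> bool:
    """True if `general` subsumes `specific` in the modelled hierarchy."""
    while specific is not None:
        if specific == general:
            return True
        specific = SUBCLASS_OF.get(specific)  # walk up the hierarchy
    return False

print(subsumes("Vehicle", "Hatchback"))  # True: a Hatchback is a Vehicle
print(subsumes("Car", "Truck"))          # False: Trucks are not Cars here
```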

If you cannot distinguish the results of querying the model from the results
of querying the world being modelled (modulo any problems with the scope of
the model), then that model can be used as a surrogate for the world being
modelled.  An agent using that model may indeed have been programmed by
humans, and may or may not be run by human beings, but that becomes
irrelevant: the agent has access to a powerful structure and may no longer
need to get an opinion from a human (or, indeed, from another software
system) about the meaning of a concept.  Instead it can consult a model
relating concepts and, provided it doesn't wander outside the scope of the
model, obtain an opinion identical to the one the model's originator would
give.  Failing that, the model isn't powerful enough for the questions being
asked (which will always be the case for some questions, but that shouldn't
stop the use of this approach).
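The surrogate idea can be sketched as follows (a hypothetical illustration;
the facts and relation names are invented for the example): queries inside
the model's scope return the answer the model's originator would give, while
queries outside its scope are reported as such rather than guessed at.

```python
# A model as a surrogate for the world it models, stored as
# (subject, relation) -> value pairs. Illustrative data only.
MODEL = {
    ("Dublin", "capital_of"): "Ireland",
    ("Paris", "capital_of"): "France",
}

def query(subject: str, relation: str):
    """Answer from the model, or None when the question is out of scope."""
    answer = MODEL.get((subject, relation))
    if answer is not None:
        return answer  # opinion identical to the originator's
    return None        # the model isn't powerful enough for this question

print(query("Dublin", "capital_of"))  # Ireland
print(query("Dublin", "population"))  # None: outside the model's scope
```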

		- Peter
Received on Thursday, 12 April 2001 14:05:45 UTC
