
Agents: Speech or Borg (was RE: performatives and trust)

From: Peter Crowther <peter.crowther@networkinference.com>
Date: Tue, 5 Jun 2001 16:59:49 +0100
Message-ID: <B6F03FDBA149CA41B6E9EB8A329EB12D05A36E@vault.melandra.net>
To: "'Seth Russell'" <seth@robustai.net>
Cc: www-rdf-logic@w3.org
> From: Seth Russell [mailto:seth@robustai.net]
[...]
> I for one do not consider the metadata that I encounter on the web
> anything but speech acts, and believe that it would be foolish to
> believe otherwise.  And most of the knowledge I discover on the net
> in recent months is not via some search at Google, but rather by some
> social transaction.  Such social transactions provide far more
> relevant results.  What we need are automated agents to assist our
> social processes; not some kind of group think.
> 
>   The future internet is *NOT* a communication media for the Borg !!!

An agent is never just 'an agent'.  An agent is an agent *of*
something/someone (referred to as its _principal_), acting as or on behalf
of that principal for some defined purpose.  In order to do that, the agent
needs to share a property with its principal: that it makes the same
decisions as, and gives the same answers as, its principal for the defined
purpose.  A software agent needs enough information about its principal
(whether that be a commerce web site, an individual surfing or a neurotic
AI) to be able to behave as its principal would for the current purpose.
For example, consider a (real-)estate agent acting on behalf of a property
vendor: within the defined purpose, the real-estate agent's job is to behave
as the vendor would, without troubling the vendor where possible.

A human agent, operating at human speeds, can communicate with its principal
--- if an offer comes in that looks promising, the real-estate agent asks
the vendor about it.  Human processes tend to allow for the delays this
consultation introduces.  Computer agents --- at least with the current trend
towards instant decision-making --- may not have this luxury, and certainly
will not be useful if they have to consult their principal at each step.
They have to know enough to make decisions on their own; this ideally
requires a Borg-like transfer of knowledge from principal to agent, rather
than intensive teaching.  Failing that, it requires the principal to
identify chunks of knowledge and belief upon which the agent can operate ---
those are then applied by the agent and used as its own knowledge and
beliefs on behalf of its principal.
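As a rough sketch of that last alternative (all class and method names here
are my own invention, not any proposed API): the principal exports the chunk
of knowledge relevant to the defined purpose, and the agent adopts it as its
own beliefs, after which it can decide without consulting the principal.

```python
# Hypothetical sketch of principal-to-agent knowledge transfer.
# Names, structure, and the example figures are invented for illustration.

class Principal:
    def __init__(self, beliefs):
        self.beliefs = beliefs

    def export_chunk(self, purpose):
        # Identify the chunk of knowledge relevant to the defined purpose;
        # everything else (the principal's cat, say) stays private.
        relevant = {"sell_house": ["min_price", "accept_chains"]}[purpose]
        return {k: self.beliefs[k] for k in relevant}

class Agent:
    def __init__(self):
        self.beliefs = {}

    def load(self, chunk):
        # The 'file load': the chunk becomes the agent's own beliefs.
        self.beliefs.update(chunk)

    def decide(self, offer):
        # Decide as the principal would, without troubling the principal.
        return offer >= self.beliefs["min_price"]

vendor = Principal({"min_price": 90000, "accept_chains": False, "pet": "cat"})
agent = Agent()
agent.load(vendor.export_chunk("sell_house"))
print(agent.decide(95000))  # above the vendor's minimum -> True
print(agent.decide(80000))  # below it -> False
```

The point of the sketch is only that the transfer is a bulk load of
identified chunks, not a speech-act-by-speech-act teaching process.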

Speech acts are OK as far as they go, but somehow we have to be able to
bootstrap agents with an understanding of their principal's behaviour for
their defined purpose.  For that, I think speech acts are inappropriate; a
good ol' file load (or equivalent) straight into the agent's knowledge is
much more effective.  The Trust layer in TBL's layered architecture provides
at least some degree of control over this process, so that you don't
simultaneously get your agent trying to 'believe' Discordianism, Paganism
and Jedi all at once just because some chunk of metadata pointed at them.
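The gating role of that Trust layer might be sketched like this (again a
hypothetical illustration, not any actual Semantic Web API): the agent reads
all incoming metadata but only adopts assertions from sources its principal
has vouched for.

```python
# Hypothetical sketch of a trust gate over incoming metadata;
# source names and assertions are invented for illustration.

TRUSTED = {"vendor", "solicitor"}  # sources the principal has vouched for

incoming = [
    ("vendor", "min_price", 90000),
    ("random_site", "true_religion", "Discordianism"),
    ("another_site", "true_religion", "Jedi"),
]

agent_beliefs = {}
for source, key, value in incoming:
    if source in TRUSTED:
        agent_beliefs[key] = value  # adopted as the agent's own belief
    # untrusted assertions are seen but not believed

print(agent_beliefs)  # {'min_price': 90000}
```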

Once the agent-principal link is established, then I agree with you that
communication between agents is via speech acts or some close analogue, just
as your communication with the Republican Party web site is (probably) a
speech act: they say things, you choose what to use.  The agents are then in
a position to assist their principals --- us --- with our social processes.
But until they're bootstrapped with something they can use to act on our
behalf, they're a handicap.

		- Peter
Received on Tuesday, 5 June 2001 12:00:04 GMT
