Re: Agent-mediated access (was Re: Criticism of Kidcode...)

Greetings and salutations.

On June 19th, Peter Deutsch wrote:

<snip>
> Actually, the first testbed application is essentially
> done, although it doesn't use proxy servers or work at the
> individual protocol level. This is exactly the kind of
> thing we envision for Uniform Resource Agents, which are
> objects which we propose as a mechanism for packaging up
> net expertise.

I agree strongly that designing the functional
architecture at the protocol level is a straitjacket
(see [1] for some of our ideas on this subject).

Take another look at the item I recently posted on
searching using URCs encapsulating scripts.  These
provide an alternative way of implementing (potentially
mobile) agents within the Web, with the advantage that
URCs are starting with a clean slate so we can figure
out how to do it right first time.
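
To make that concrete, here is a rough sketch, in
Python, of how a client might dispatch a URC whose
attributes include an embedded search script.  Every
field name and helper here is my invention; none of it
comes from the current URC drafts.

    # Hypothetical URC: attribute/value pairs, one of which
    # carries the agent code.  Field names are illustrative.
    urc = {
        "URN":           "urn:example:book-search",
        "TITLE":         "Book Search",
        "ACCESS-SCRIPT": "results = query(terms)",
    }

    def run_urc(urc, terms, query):
        # The client chooses the primitives the script may use
        # (here just 'query'), so the script itself never names
        # a protocol or a server.
        env = {"terms": terms, "query": query}
        exec(urc["ACCESS-SCRIPT"], env)  # sandboxing needed, of course
        return env["results"]

    print(run_urc(urc, "agents", lambda terms: ["hit 1", "hit 2"]))

The point is only that the script travels with the
resource description, while the client decides what
the script is allowed to touch.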

> As we define them, URAs are capable of searching,
> accessing and filtering available Internet resources
> without requiring the user to provide, or even be aware
> of, specific access mechanisms. We don't use proxies,
> since we think architecturally it makes more sense to move
> the agent manipulation onto the desktop, as this is the
> cheapest resource the user can access. Still, I think they
> can do what you want out of the box.

The assumption that the desktop is the most desirable
place for processing could well prove shaky in the
long term.  Mobile computing models are moving the
other way (the batteries last longer if the processing
load is lighter :-).

When you say "...don't use proxies..." does that mean
your model _will_not_ use proxies, or _need_not_?

<snip>
> Architecturally what distinguishes URAs from other agent
> proposals, such as Knowbots or Hot Java, is that we focus
> on URAs as objects that exist and are managed in the
> client's local space. When invoked, a URA probes out to
> the net for searching and access. Results are then
> accepted, sorted and/or filtered and then presented to the
> user. We do hope/expect to see URA authors sharing new URAs
> across the net, but normally we foresee users invoking
> their own copy of a URA, which has been presumably
> tested/approved for local use.

I'd appreciate a fuller explanation of why the
management policies you apply to URAs depend on their
location (which is what the above seems to say).

One of the benefits of object technology (and one of
the reasons why the Web needs it) is that it allows
transparency with respect to factors like location,
replication &c &c. 

> With this approach we do not require trusted access to
> outside servers to run (a la Knowbots), nor do we assume
> that we will be down-loading applets from non-trusted
> outside servers (a la Hot Java).

Both of these models are valid for a variety of cases.
They should be as implementable as the RPC model you
use for URAs.

> More importantly, we see URAs as existing at an
> operational layer above access protocols. With our first
> client we have already integrated them with existing
> Mosaic-based browsing technology, so that users will use
> Mosaic or Netscape to browse individual items. At the same
> time, we aren't forcing users to use an external server for
> mediated access. We think this is necessary for scaling,
> if nothing else.

Different kinds of agents can exist.  Some might be
confined to layers and work in teams, as when agent A
asks agent P to establish a protocol between it and
server S, and P then hands back the completed
interface that S offers on that protocol.  Others
might be more rounded characters, adapted to sparser
environments (say, if I only accept a single agent
from your domain into my domain at a time, for
monitoring purposes).  These latter agents will need
to be able to drop back to protocol-layer activity in
order to make progress in some contexts.
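
A minimal sketch of the team case, with every class
and method name mine rather than from any proposal:

    # Agent P owns the protocol knowledge; agent A works at the
    # task layer and never names a protocol itself.

    class Interface:
        # What P hands back: S's interface, protocol hidden inside.
        def __init__(self, server, protocol):
            self.server, self.protocol = server, protocol
        def request(self, item):
            # A real system would drive the wire protocol here.
            return "%s via %s: %s" % (self.server, self.protocol, item)

    class ProtocolAgent:                        # agent P
        def establish(self, server):
            return Interface(server, "z39.50")  # P's choice, not A's

    class TaskAgent:                            # agent A
        def __init__(self, p):
            self.p = p
        def fetch(self, server, item):
            iface = self.p.establish(server)    # delegate to P
            return iface.request(item)          # use what P built

    print(TaskAgent(ProtocolAgent()).fetch("server.example.org",
                                           "some item"))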

> The results lists generated by URAs in our client are
> presented to the user as a set of headlines (you can think
> of them as a form of URN, since they name resources without
> specifying access, although they don't use any of the
> current URN proposals yet). Most importantly, with our
> approach there is not a URL in sight during the initial
> specification and access.  Once the user invokes an object
> they are prompted for any needed information (eg. search
> terms if it is a search) and then the access is performed.
> Results are then presented as a set of headlines for
> viewing, and once a headline is selected, the associated
> URL is passed to the browser (currently Mosaic/Netscape)
> for access. 

I find this all pretty neat, but it makes a lot of
assumptions about user interactions, and bakes the
external user interface into the URA architecture.

Have you considered doing presentation separately from
invocation/termination?
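
A sketch of the separation I have in mind (names
invented): invocation yields neutral result records,
and the embedding client plugs in whatever
presentation suits it.

    # Invocation returns plain records with no UI assumptions;
    # presentation is a separate, replaceable step.

    def invoke(agent, params):
        return agent(params)             # neutral result records

    def present_as_headlines(records):
        for r in records:
            print(r["headline"])         # one possible presentation

    def present_as_count(records):
        print("%d items found" % len(records))  # another client's choice

    records = invoke(lambda p: [{"headline": "Stock Quoter"},
                                {"headline": "Book Search"}], {})
    present_as_headlines(records)

That way an agent embedded in another agent, or in a
batch process, need not drag a headline viewer around
with it.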

> This approach effectively hides URLs (and thus access
> info) from the user entirely until the client hits the
> browser step. If you provide a suitable browser, the
> physical access information need never appear to the user
> at all. We feel that among other things, this will go a
> long way to providing client-driven content
> filtering, which seems to be the only effective way to
> implement content control on the net. At the same time, it
> will free users from thinking of their information in
> terms of access at all, which is our greater and more
> important goal.

Strongly agree that filtering is a client job [1].

Not sure that you can ever free users from the need
to consider access; location, yes.  For one thing,
some users will be other agents; for another,
mixed-content information generally requires a high
degree of decision enforcement.  The best one can do
is to provide selective transparency for access.

(Brian wrote:)
> } I've outlined many of these thoughts in a short paper at
> } 
> } http://www.organic.com/Staff/brian/community-filters.html
> 
> Got the paper and the only problem I have with it is
> the assumption that architecturally we want everything
> going through HTTP proxy servers, as these form a natural
> bottleneck and leave the user with a browser that can
> still effectively see the entire world. From our
> perspective, it also is suboptimal since it requires users
> to continue viewing the net in terms of access protocols
> ("http://" indeed). I want users selecting items based
> upon names like "Stock Quoter" or "Book Search". Let the
> object figure out how to find the server.

The object world uses traders to find other objects.
In many ways, traders function similarly to proxies, at
least as far as the client is concerned.  (Of course
you generally want access to multiple traders.)
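
For anyone who hasn't met one, a trader stripped to
its bare bones looks something like this.  The names
are invented, and real traders (e.g. in the ODP work)
are much richer:

    # The client asks for a service *type*; the trader returns
    # the current offers; the client picks one.  No hard-wired
    # URL anywhere.

    class Trader:
        def __init__(self):
            self.offers = {}             # service type -> locations
        def export(self, service_type, location):
            self.offers.setdefault(service_type, []).append(location)
        def lookup(self, service_type):
            return self.offers.get(service_type, [])

    t = Trader()
    t.export("book-search", "server-a.example.org")
    t.export("book-search", "server-b.example.org")
    print(t.lookup("book-search"))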

Reference [1] has links to ideas on how to use proxies
as part of the migration path away from explicit access
protocols, and also on the desirability of decoupling
transport protocol and access scheme.

> I'm happy to use servers to supply a URA to the client for
> execution, but want the executing code to be as close to
> the user as possible. Otherwise we can expect scaling
> problems with proxies being swamped by demand, and
> security problems since users are still essentially armed
> with a generalized browser and can potentially see the
> entire net if your filtering fails.

The concept of closeness to the user doesn't stand up
for long if you look at it really closely.  The system
has multiple representations of the user, not all of
them the same.  My mail persona differs enormously from
my login persona.  But that's not as important as the
fact that the scaling argument is weak.  If you need to
locate resources transparently, then you need to use
some form of trading.  The scaling problem occurs at
the level where the traders are interconnected.  Making
them scale is a problem that is theoretically
interesting but practically very well solved.
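
Extending the Trader sketch above, interconnection is
not mysterious either.  This ignores the hard parts
(offer propagation, policy), and assumes the peers are
themselves federated traders:

    # A lookup that fails locally is forwarded to linked
    # traders, so load and knowledge spread across the
    # federation instead of concentrating at one proxy.

    class FederatedTrader(Trader):
        def __init__(self, peers=()):
            Trader.__init__(self)
            self.peers = list(peers)
        def lookup(self, service_type, seen=None):
            seen = set() if seen is None else seen
            if id(self) in seen:         # don't loop round cycles
                return []
            seen.add(id(self))
            local = Trader.lookup(self, service_type)
            if local:
                return local
            for peer in self.peers:
                found = peer.lookup(service_type, seen)
                if found:
                    return found
            return []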

The security problems generated by mobile code aren't
very different from those of RPC requests as far as
access goes: if I can fake my clearance then I can
access things I don't have a right to know.  The real
question is whether I can damage something of yours.
The technology clearly exists to handle this problem
(Java, Safe-Tcl, Obliq, Telescript(TM) &c).
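
The principle those systems share can be shown in a
toy, and I stress toy: this illustrates the capability
idea, it is not a real sandbox.

    # The host, not the visiting code, decides which operations
    # exist in the execution environment.  Invented names.

    SAFE_OPS = {
        "search": lambda terms: ["headline one", "headline two"],
        # deliberately absent: file access, raw sockets, delete...
    }

    def run_visitor(code):
        env = {"__builtins__": {}}       # withhold default builtins
        env.update(SAFE_OPS)
        exec(code, env)                  # code sees only SAFE_OPS
        return env

    run_visitor('results = search("agents")')   # fine
    # run_visitor('open("/etc/passwd")')        # NameError: not offered

If visiting code can only damage what I explicitly
hand it, the question reduces to what I choose to hand
it.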

> This doesn't mean all processing should be on the client
> machine, but we should look at approaches that move at
> least some of it onto the desktop since, as I said earlier,
> it's the cheapest net resource a user has access to.

As pointed out above, whether this statement is true,
or will remain true for very long, is debatable.  It
depends so strongly on a particular computational model
that it can't be allowed to influence long-term
architectural decisions.

References

[1] <URL:http://www.ansa.co.uk/phase3-activities/>

[2] Madsen, Fogg & Ruggles. Libri 44(3):237-257.

--
________________________________________________________________________
Mark Madsen: <msm@ansa.co.uk> <URL:http://www.ansa.co.uk/Staff/msm.html>
Information Services Framework, The ANSA Project, APM Ltd., Castle Park,
Cambridge CB3 0RD, U.K.  <URL:http://www.ansa.co.uk/>;  <apm@ansa.co.uk>
Voice: +44-1223-568934; Reception: +44-1223-515010; Fax: +44-1223-359779
