
Agent-mediated access (was Re: Criticism of Kidcode...)

From: Peter Deutsch <peterd@bunyip.com>
Date: Mon, 19 Jun 1995 20:46:37 -0400
Message-Id: <9506200046.AA11021@expresso.bunyip.com>
To: Brian Behlendorf <brian@organic.com>, Martijn Koster <m.koster@nexor.co.uk>
Cc: Nathaniel Borenstein <nsb@nsb.fv.com>, rating@junction.net, www-talk@www10.w3.org, uri@bunyip.com, leslie@bunyip.com
g'day,

Hmmm, I looked at the CC list and asked myself which to
cut, but since I'm not on all those lists and would thus
miss certain replies, I'll beg the reader's indulgence for
just a minute. I want to focus on the agent component of
the previous posting and ask that follow-ups to this new
thread go to the URI list, which is where we are currently
pursuing this work.

In this posting I'm going to provide some details of URAs
(our proposal for Uniform Resource Agents) and Silk (our
first application which supports URAs). If you don't want
to see this, hit your equivalent of `n' now...


[ Brian wrote: ]

} .  .  .  I've been thinking a lot about the situation, and it 
} seems to me that a simple solution would be to combine a filtering 
} application with an existing HTTP (or other protocol) proxy server.  
} .  . . The filters could be combined as well, and updated using 
} HTTP transaction.  I really don't believe this is a huge technological 
} problem - I think one could take the CERN or TIS proxy and with 4 
} engineer-months create a filtering application.  

Actually, the first testbed application is essentially
done, although it doesn't use proxy servers or work at the
individual protocol level. This is exactly the kind of
thing we envision for Uniform Resource Agents, which are
objects which we propose as a mechanism for packaging up
net expertise.

As we define them, URAs are capable of searching,
accessing and filtering available Internet resources
without requiring the user to provide, or even be aware
of, specific access mechanisms. We don't use proxies,
since we think architecturally it makes more sense to move
the agent manipulation onto the desktop, as this is the
cheapest resource the user can access. Still, I think they
can do what you want out of the box.

We are currently testing "Silk", our first application for
this URA technology, and hope to turn a version loose to
the net in the next few weeks.  Meanwhile, it's available
for willing beta-testers now.

Architecturally what distinguishes URAs from other agent
proposals, such as Knowbots or Hot Java, is that we focus
on URAs as objects that exist and are managed in the
client's local space. When invoked, a URA probes out to
the net for searching and access. Results are then
accepted, sorted and/or filtered and then presented to the
user. We do hope/expect to see URA authors sharing new URAs
across the net, but normally we foresee users invoking
their own copy of a URA, which has been presumably
tested/approved for local use.

With this approach we do not require trusted access to
outside servers to run (a la Knowbots), nor do we assume
that we will be down-loading applets from non-trusted
outside servers (a la Hot Java).

More importantly, we see URAs as existing at an
operational layer above access protocols. With our first
client we have already integrated them with existing
Mosaic-based browsing technology, so that users will use
Mosaic or Netscape to browse individual items. At the same
time, we aren't forcing users to use an external server for
mediated access. We think this is necessary for scaling,
if nothing else.

The results lists generated by URAs in our client are
presented to the user as a set of headlines (you can think
of them as a form of URN, since they name resources without
specifying access, although they don't use any of the
current URN proposals yet). Most importantly, with our
approach there is not a URL in sight during the initial
specification and access.  Once the user invokes an object
they are prompted for any needed information (e.g. search
terms if it is a search) and then the access is performed.
Results are then presented as a set of headlines for
viewing, and once a headline is selected, the associated
URL is passed to the browser (currently Mosaic/Netscape)
for access. 
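To make that flow concrete, here is a minimal sketch of the headline model. This is purely illustrative Python (Silk agents were actually Tcl scripts), and every class and function name here is hypothetical, not Silk's real interface:

```python
# Illustrative sketch only: a toy "URA" whose results are headlines.
# The user sees names of resources; URLs surface only at the browser step.

class Headline:
    """Names a resource without exposing its access mechanism."""
    def __init__(self, title, url):
        self.title = title
        self._url = url   # kept out of sight; never shown to the user

class BookSearchURA:
    """A toy agent: it knows how to search; the user only sees headlines."""
    def prompt(self):
        # In Silk the user would be prompted interactively here.
        return {"terms": "internet agents"}

    def access(self, params):
        # A real URA would probe out to the net (HTTP, Gopher, WAIS, ...).
        # Here we fake the results for the sketch.
        return [
            Headline("Agents on the Net", "http://example.org/agents"),
            Headline("Resource Discovery", "gopher://example.org/1/rd"),
        ]

def run_ura(ura, browse):
    params = ura.prompt()                  # 1. gather any needed input
    headlines = ura.access(params)        # 2. perform the access
    for i, h in enumerate(headlines):      # 3. present headlines only
        print(f"{i}: {h.title}")
    chosen = headlines[0]                  # 4. user selects a headline
    browse(chosen._url)                    # 5. only now is the URL used

run_ura(BookSearchURA(), browse=lambda url: print("browser opens:", url))
```

The point of the sketch is step 5: the URL exists only inside the handoff to the browser, never in the selection interface.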

This approach effectively hides URLs (and thus access
info) from the user entirely until the client hits the
browser step. If you provide a suitable browser, the
physical access information need never appear to the user
at all. We feel that among other things, this will go a
long way to providing client-driven content
filtering, which seems to be the only effective way to
implement content control on the net. At the same time, it
will free users from thinking of their information in
terms of access at all, which is our greater and more
important goal.
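In this model a client-driven content filter is nothing more than a predicate applied to the headline list before anything is displayed. A hedged sketch (hypothetical names, not Silk code):

```python
# Sketch: client-driven content filtering over headlines.
# Because filtering happens on the desktop, before display, a blocked
# resource's URL never even reaches the browser.

def filter_headlines(headlines, blocked_words):
    """Drop any (title, url) pair whose title contains a blocked word."""
    return [h for h in headlines
            if not any(w.lower() in h[0].lower() for w in blocked_words)]

results = [("Stock Quoter", "http://example.org/quotes"),
           ("Adult Site",   "http://example.org/x")]
kept = filter_headlines(results, blocked_words=["adult"])
print([title for title, url in kept])
```

Since the filter runs in the client's own space, each community can maintain its own blocked-word (or approved-agent) list without any shared proxy in the path.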

Silk is the first application we've developed which
manages access to URAs. It provides an object library
manager, a search/access manager and integrated access to
those versions of Mosaic/Netscape which currently provide
a usable API. Silk agents are currently implemented as
suitably structured Tcl scripts which may be customized
and shared by users, although in a future release we plan a
client-server based version of URAs which will hide the
actual implementation of the objects from the user. Our
long-term goal is to hide both agent implementation and
access mechanisms such as URLs, but we feel there's lots to
be learned about typing of access results and so on before
this can be done successfully.
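As a rough illustration of the "object library manager" idea, agents could be registered in the client's local space under human-readable names, so the user selects "Book Search" rather than a protocol string. Again this is Python rather than Tcl, and all names are made up for the sketch:

```python
# Sketch of an object library manager: agents live in the client's local
# space and are registered by name, so users pick names, not protocols.
# Hypothetical interface; not Silk's actual implementation.

library = {}

def register(name, agent_factory):
    """Add an agent to the local library under a human-readable name."""
    library[name] = agent_factory

def invoke(name):
    """Look up an agent by name and run the user's own local copy."""
    return library[name]()

register("Book Search", lambda: "searching book catalogues...")
register("Stock Quoter", lambda: "fetching quotes...")

print(sorted(library))        # the user browses names, never URLs
print(invoke("Book Search"))
```

Sharing an agent across the net then just means shipping the script and registering it locally, after whatever local testing or approval the site requires.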

As I mentioned at the start, Silk code is now in the hands
of a limited set of beta testers, with a tentative release
date to the net in the next few weeks. We'd be happy to
share it with anyone on this list who wants to examine it.
For more info you can send email to the project manager,
Leslie Daigle at "leslie@bunyip.com".  She's out of the
office this week, but _is_ checking email.  If you CC me
on the mail, I'll see that someone replies as soon as possible.

} I've outlined many of these thoughts in a short paper at
} 
} http://www.organic.com/Staff/brian/community-filters.html

Got the paper, and the only problem I have with it is
the assumption that architecturally we want everything
going through HTTP proxy servers, as these form a natural
bottleneck and leave the user with a browser that can
still effectively see the entire world. From our
perspective, it is also suboptimal since it requires users
to continue viewing the net in terms of access protocols
("http://" indeed). I want users selecting items based
upon names like "Stock Quoter" or "Book Search". Let the
object figure out how to find the server.

I'm happy to use servers to supply a URA to the client for
execution, but want the executing code to be as close to
the user as possible. Otherwise we can expect scaling
problems with proxies being swamped by demand, and
security problems since users are still essentially armed
with a generalized browser and can potentially see the
entire net if your filtering fails.

This doesn't mean all processing should be on the client
machine, but we should look at approaches that move at
least some of it onto the desktop since, as I said earlier,
it's the cheapest net resource a user has access to.

.  .  .
} We looked at doing this as a software development project, but the 
} product liability issues are absolutely enormous, so we're concentrating 
} on other things.  We would be willing to help support a public-domain 
} development effort, a la Apache and VRML.  

I'm not sure I'm as pessimistic as you are on the product
liability front, but our emphasis has been on providing an
environment in which agents can be used, rather than
providing specific agents. I'm hoping/expecting developers
for specific communities to supply their own agents, tuned
to their needs.  We're also not generally focused on the
issue of censorship, viewing this as a spinoff of the agent
mechanism, rather than the goal of our work. What we
really want to do is hide access mechanisms and get users
thinking in terms of information, not access protocols. If
this permits users to more easily develop content filters,
that's a bonus for us.


					- peterd



-- 
------------------------------------------------------------------------------

     ...there is reason to hope that the machines will use us kindly, for
     their existence will be in a great measure dependent on ours; they will
     rule us with a rod of iron, but they will not eat us...

                                               - Samuel Butler, 1872
------------------------------------------------------------------------------
Received on Sunday, 25 June 1995 22:17:39 GMT
