Agent-mediated access, KidCode critiques, and community standards

	It seems that the groups involved in the discussion of the
KidCode proposal are now on several very different wavelengths about
its proper direction.  Some are making criticisms and suggestions
about the proposal itself, and accept the notion that there is an
immediate, pressing need to generate a reasonable voluntary labelling
scheme (in order to avoid the imposition of something much worse);
others wish to see the discussion placed in the broader context of
content-labelling and resource discovery, with parental control as a
small part of those issues; still others wish to have us examine the
role technologists play in enabling content-control of the net.  Each
of these is a valid viewpoint, and none is well served by accusations
of mere FUD or obstructionism.  We are, at base, discussing how the
content of the Web will be understood and how we will interact with
it in the future; these concerns are as important as they are
immediate.

	For those who believe that there is an immediate need to
establish a reasonable, voluntary labelling scheme, Martijn Koster and
Ronald Daniel have made very cogent arguments about the need to keep
access control, labelling, and subject description apart.  Ronald
Daniel's discussion of URCs points to a working group which is
dealing with this issue in a careful and complete manner.  For those
who need something more immediately, Martijn's work with robot
exclusion may provide a workable, ready-to-hand solution.  Under the
robot exclusion standard, robots check for a file called "robots.txt"
at any site they traverse; it lists a user agent, then the partial
URLs that agent should not visit (for the complete scoop on this, see
http://web.nexor.co.uk/users/mak/doc/robots/norobots.html ).  The
robot retrieves and parses this information before it walks the site.
A very similar solution could be used for the purpose envisioned by
KidCode--a browser could be set to always check for a file (or files)
containing the access information before traversing a site.  As a
very quick, very dirty method, you could simply have two files,
kids.txt and adults.txt, each advertising that the site is meant
either for children or for adults; a better solution, I believe, even
in the interim, would be an "audience.txt" which lists which parts of
the site contain information appropriate to which audiences.  This
method draws on a body of working code (in Harvest, WebCrawler,
Lycos, etc.) and would not break any existing system.  It is also a
very good example of a completely voluntary standard being adopted by
its community.
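
	To make this concrete, here is a minimal sketch of the check a
browser (or proxy) could perform before traversing a site.  It is
only an illustration of the idea, not anyone's implementation: the
file name "audience.txt" and its "Audience:"/"Path:" record format
are assumptions of mine, loosely modelled on robots.txt, and the
Python below is a sketch rather than anything deployed.

    # Sketch of the voluntary audience check described above.  The file
    # name "audience.txt" and the "Audience:"/"Path:" record format are
    # assumptions for illustration only, loosely modelled on robots.txt.
    import urllib.parse
    import urllib.request

    def fetch_audience_map(site_url):
        """Fetch and parse a site's (hypothetical) /audience.txt.

        Returns a list of (audience, path_prefix) pairs; an empty list
        means the site makes no claim, which keeps the scheme voluntary.
        """
        url = urllib.parse.urljoin(site_url, "/audience.txt")
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                text = resp.read().decode("utf-8", errors="replace")
        except OSError:
            return []                           # no file, no claim made
        entries, audience = [], None
        for line in text.splitlines():
            line = line.split("#", 1)[0].strip()    # drop comments
            if not line:
                continue
            field, _, value = line.partition(":")
            field, value = field.strip().lower(), value.strip()
            if field == "audience":
                audience = value                # e.g. "children", "general"
            elif field == "path" and audience:
                entries.append((audience, value))
        return entries

    def suitable(entries, path, accepted_audiences):
        """True when every label covering the path is one the user
        accepts, or when the path carries no label at all."""
        labels = [a for a, p in entries if path.startswith(p)]
        return all(a in accepted_audiences for a in labels)

	A browser configured for a child's account would simply skip
(or flag) any URL for which suitable() returns false; a site that
publishes no audience.txt is treated as unlabelled, just as sites
without robots.txt are treated by robots today.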

	Ultimately, of course, it's a hack, and it would need to be
replaced.  Several interesting methods for replacing it have been
discussed; I believe that the URC method with SOAPs covers the ground
very well.  I have also requested to beta test the "Silk" URA, and I
encourage others interested in this issue to do the same, so that we
can learn whether it can create the kind of bounded groups it
intends.  I am particularly
interested to see how this URA implementation deals with the demands
of a user wanting a full result for a search, while still filtering
the results, since full results are often achieved by creating fuzzy
bounds.  I believe that Brian Behlendorf's proposal is workable as a
commercial venture, right now, but I do worry that its view of proxy
caches and firewalls at a very local level may be unworkable.  As part
of the GLOBE project, I've worked with several hundred schools setting
up web access for teachers, and very, very few of the schools would be
able to handle caching proxy servers or firewalls with the current
technologies.  Significant improvements in the ease of use of those
supporting technologies would be needed before that would be possible.

	For those worried that content-labelling may lead to attempts
to censor based on a certain set of community standards, it seems that
working toward a solution similar to the "Silk" proposal may be in order.  
If I understand it correctly, it works to allow the individual user to bound
realms of knowledge in particular ways; this is certainly better than
allowing any community to impose its standards on available knowledge.  It
does run the risk that those too lazy to create their own realms of knowledge
will blindly take up those created by others, but there is really
nothing we can do about that.
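
	If something like the audience.txt sketch above were in use,
that user-side bounding could live entirely in the user's own browser
configuration.  I have not seen Silk's internals, so the snippet
below is not a description of it; it only illustrates the general
point that the user, rather than any community, supplies the bound,
reusing the hypothetical helpers sketched earlier.

    # Hypothetical per-user configuration: the user, not the site or
    # any community, chooses which audiences to accept.  Reuses the
    # fetch_audience_map() and suitable() helpers sketched above.
    accepted = {"children", "general"}          # chosen by this user
    entries = fetch_audience_map("http://example.org/")
    page = "/science/projects.html"
    if suitable(entries, page, accepted):
        print("traverse", page)                 # fetch and display as usual
    else:
        print("skip", page)                     # or flag it for the user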

	We do risk a great deal at this juncture.  The Internet
protocols and applications paradoxically allow both global access and
a very narrow focus.  It is already possible for someone using the Net
to restrict their focus to sites, mailing lists, and groups which
promote very narrow views of the world; at least now, however, this is
a decision made by the user on what view to adopt after seeing the
available choices, rather than a world view built into the tools
through which they access the Net.  As we work towards solutions, I
believe preserving that distinction is a worthy goal.  We should be
making it easier for people to find those things which interest them.
We should be making it possible for people to ignore those things
which don't interest them.  We should not be choosing for them what
might interest them or what they should ignore, and any interim
solutions that do make such choices should make clear that even the
author's opinion of a work should not be the sole reference for its
appropriate audience.

			Regards,
				Ted Hardie
				NASA NAIC
