
Re: Criticism of Kidcode (was Re: KidCode: Next steps )

From: Brian Behlendorf <brian@organic.com>
Date: Mon, 19 Jun 1995 14:39:58 -0700 (PDT)
To: Martijn Koster <m.koster@nexor.co.uk>
Cc: Nathaniel Borenstein <nsb@nsb.fv.com>, rating@junction.net, www-talk@www10.w3.org, uri@bunyip.com
Message-Id: <Pine.3.89.9506191441.J28468-0100000@eat.organic.com>
On Mon, 19 Jun 1995, Martijn Koster wrote:
> I think the KidCode solution is technically the wrong way to do it,
> because it changes the nature of a URL, which from the Web's
> conception has been nothing more than a location, to include an access
> policy.
> I'd strongly urge this group to consider a resource's location and
> access policy as separate bits of information.

I strongly agree.  I've been thinking a lot about the situation, and it 
seems to me that a simple solution would be to combine a filtering 
application with an existing HTTP (or other protocol) proxy server.  This 
"filtering application" would consist of an access control list of either 
good URL regexps (thus allowing only URLs which match the regexp) or bad 
URL regexps (only disallowing passage of particular URLs).  These lists 
could be created by any number of administration levels - by the teacher 
at a classroom level, by a group of teachers at the school district 
level, by the Department of Education or the PTA, or even commercially by 
third parties.  The filters could be combined as well, and updated via 
HTTP transactions.  I really don't believe this is a huge technological 
problem - I think one could take the CERN or TIS proxy and with 4 
engineer-months create a filtering application.  
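To make the ACL idea concrete, here is a rough sketch of the filtering 
check such a proxy would need -- layered allow/deny regexp lists combined 
across administration levels.  All names and patterns are hypothetical; 
this is not CERN or TIS proxy code, just an illustration of the logic:

```python
import re

class URLFilter:
    """An access control list of URL regexps, in either 'allow' mode
    (pass only URLs matching some pattern) or 'deny' mode (pass
    everything except URLs matching some pattern)."""

    def __init__(self, patterns, mode="deny"):
        self.patterns = [re.compile(p) for p in patterns]
        self.mode = mode

    def permits(self, url):
        matched = any(p.search(url) for p in self.patterns)
        return matched if self.mode == "allow" else not matched

# Filters from several administration levels (classroom, district,
# third party) can be stacked: a URL passes only if every filter
# in the chain permits it.
def combined_permits(filters, url):
    return all(f.permits(url) for f in filters)
```

A classroom-level deny list and a district-level allow list would then 
compose naturally, with the most restrictive filter winning.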

I've outlined many of these thoughts in a short paper at


It doesn't specify anything in enough detail to be an RFC, but I would be 
happy to work with people who'd like to come up with the syntax and 
protocols for these ACLs and the inter-ACL transactions.  Please read 
this document before responding to this post; there are some delicate 
issues I address more fully there.

One thing my paper doesn't discuss, which Nathaniel's RFC focuses on, is 
self-evaluation by content providers.  I find that a very difficult 
situation, since any system like that implies enforcement by some 
agency... To accomplish the same thing I might be in favor of a new HTTP 
header, as Martijn is, but I think Keywords: could also be used.
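For illustration only: a filter could, purely hypothetically, treat a 
self-applied Keywords: header as a comma-separated list of content 
labels and match it against a banned set.  The label vocabulary and 
function names below are invented, not part of any spec:

```python
def keyword_labels(headers):
    """Parse a comma-separated Keywords: header into a set of
    lowercased labels.  Returns an empty set if absent."""
    raw = headers.get("Keywords", "")
    return {k.strip().lower() for k in raw.split(",") if k.strip()}

def blocked_by_labels(headers, banned):
    """True if any self-applied label appears in the banned set."""
    return bool(keyword_labels(headers) & banned)
```

The point being that no new header is strictly required for a 
self-labelling scheme -- though, as noted, honest labelling still 
implies enforcement by some agency.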

We looked at doing this as a software development project, but the 
product liability issues are absolutely enormous, so we're concentrating 
on other things.  We would be willing to help support a public-domain 
development effort, a la Apache and VRML.  


brian@organic.com  brian@hyperreal.com  http://www.[hyperreal,organic].com/
Received on Monday, 19 June 1995 17:39:39 UTC
