W3C home > Mailing lists > Public > www-talk@w3.org > May to June 1995

Re: Agent-mediated access, kidcode critiques, and community standards

From: Martijn Koster <m.koster@nexor.co.uk>
Date: Tue, 20 Jun 1995 22:23:24 +0100
Message-Id: <9506202125.AA02977@www10.w3.org>
To: hardie@merlot.arc.nasa.gov (Ted Hardie)
Cc: brian@organic.com (Brian Behlendorf), peterd@bunyip.com, rating@junction.net, www-talk@www10.w3.org, uri@bunyip.com

Ted,

first of all thanks for your summary, and for putting the
discussion back into a constructive sphere.

In message <199506201827.LAA09250@merlot.arc.nasa.gov>, Ted Hardie writes:

> 	For those who believe that there is an immediate need to
> establish a reasonable, voluntary labelling scheme, Martijn Koster and
> Ronald Daniel have made very cogent arguments about the need to keep
> access control, labelling, and subject description apart.  Ronald
> Daniel's discussion of URC indicates a working group which is dealing
> with this issue in a very cogent and complete manner.  For those who
> need something more immediately, Martijn's work with Robot exclusion
> may provide a workable, ready-to-hand solution. [model deleted]

A few comments.

I mentioned URCs only fleetingly in my original post because they're
still experimental, and require an infrastructure that is not there yet
(and because I'm not 100% up-to-date with their status :-). I agree
with the URC proponents that they give a lot of flexibility that is
appropriate for this subject matter, and I expect that's the way it
will go. But given that the need for a solution is immediate, it may be
worth considering other or intermediate solutions.

The 'audience.txt' variant of 'robots.txt' would work as Ted
describes; I assume one would use user agents or proxies to do the
filtering. However, in comparing clients and robots you are comparing
apples and oranges. There are a few reasons that robots.txt (which
definitely is a hack) actually works:

- There are few robot authors, and they have zero deployment time
  because they generally don't have a market for the gathering
  software, but run it themselves.
- There are few robot visits, and robots usually make many requests,
  so the retrieval of one extra file doesn't really impact
  performance or bandwidth.
- Sites generally place access policies on entire areas, or on a few
  specific URLs.
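
(For reference, a robots.txt placing a policy on an entire area looks
something like the following; the paths are just examples:)

```
# /robots.txt -- a site-wide policy for all robots
User-agent: *
Disallow: /private/
Disallow: /tmp/
```

One file, fetched once per robot run, covering whole subtrees -- which
is exactly why the per-document labeling below is a different problem.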

The labeling we are discussing is quite different. There are many
client software authors, with a long time to market, and a desire not
to distribute hacks (with a few exceptions :-), as old software stays
in use for ages. There are many client visits to many servers, so the
/audience.txt retrievals would be considerably more noticeable. When
it comes to labeling content at the granularity proposed by KidCode,
we are no longer talking about a few areas or a few URLs per server,
and we may quickly run into scaling problems.

So I would advise against proposing an /audience.txt as an interim
solution.

My suggestion of using a KidCode HTTP header didn't provoke much
response, while I think it has some advantages: the user gets the
choice, it scales, it can be added to existing code easily, it
doesn't require a third-party infrastructure, and it will be quite
easy to establish as a standard since it is a simple extension to
HTTP. It can also easily coexist with community schemes.
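
As a rough sketch of how a client might act on such a header (the
header name follows the proposal above, but the value syntax -- a
minimum suggested age -- is purely illustrative, not part of any
draft):

```python
# Sketch of client-side filtering on a hypothetical "KidCode" response
# header. Header name from the proposal; the numeric "minimum age"
# value syntax is an assumption for illustration only.

def should_block(headers, viewer_age):
    """Return True if the response labels itself above the viewer's age."""
    value = headers.get("KidCode")
    if value is None:
        return False          # unlabelled content passes through
    try:
        minimum_age = int(value)
    except ValueError:
        return True           # unparseable label: err on the side of caution
    return viewer_age < minimum_age
```

The point is that the decision stays in the client (or a proxy), costs
no extra retrievals, and degrades gracefully when the header is absent.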

I'd appreciate some feedback: is the lack of support for protocols
other than HTTP perceived to be a big problem? Will UR[CA]
infrastructure take as much time to deploy as adding a header to
existing code? Is the rush for an interim solution justified? Is an
HTTP header a good idea?

Darren wrote:

} So why haven't you proposed it and sold it to some browser vendors?
} Why say "You're wrong" without saying "this is right"? Propose a
} standard, write some code, lead, follow, get out of the way, or
} something.  :-)"

Well, first of all I'm not the one with the direct requirement, and
secondly I had hoped for some more informal and constructive
discussion. I didn't claim "this is right", but I did suggest what
might be better, not necessarily the best. If I get some positive
feedback I am quite willing to write up a KidCode header draft. If
not, great, I'm looking forward to more advanced solutions.

BTW, I find it a pity that few client writers seem to be joining in
these debates... I hope they're listening.

Regards,

-- Martijn
__________
Internet: m.koster@nexor.co.uk
X-400: C=GB; A= ; P=Nexor; O=Nexor; S=koster; I=M
WWW: http://web.nexor.co.uk/mak/mak.html
Received on Tuesday, 20 June 1995 17:25:58 GMT

This archive was generated by hypermail 2.2.0+W3C-0.50 : Wednesday, 27 October 2010 18:14:17 GMT