RE: Hit-metering: to Proposed Standard?

>----------
>From: 	Roy T. Fielding[SMTP:fielding@kleber.ICS.UCI.EDU]
>Sent: 	Wednesday, November 20, 1996 4:35 PM
>To: 	Jeffrey Mogul
>Cc: 	http-wg%cuckoo.hpl.hp.com@hplb.hpl.hp.com
>Subject: 	Re: Hit-metering: to Proposed Standard? 
>

[snip...]

>The other harm I mentioned is the implicit suggestion that "hit-metering"
>should be sanctioned by the IETF.  It should not.  Hit metering is a way for
>people who don't understand statistical sampling to bog down all requests
>instead of just those few requests needed to get a representative sample.

Are you saying there's a way, with HTTP as it stands, to do
statistical sampling?

Or are you implicitly proposing
	Cache-control: proxy-revalidate;stale-probability=.01
where the new directive "stale-probability=.01" (spelled however you
like) means that the cache should treat an entry as stale with
probability .01 on each access? Combined with the usual meaning of
"proxy-revalidate", this would pass a random sample of requests along
to the origin server. If so, it's an intriguing idea.
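
If that's the idea, here's a minimal sketch of how a cache might honor
such a directive, assuming it works as described above (the directive
is hypothetical, and the names in the sketch are mine, not from any
spec):

	import random

	STALE_PROBABILITY = 0.01  # value of the hypothetical directive

	def revalidate_this_hit():
	    # On each access to a fresh cached entry, decide whether to
	    # treat it as stale and forward a revalidation request to
	    # the origin server.
	    return random.random() < STALE_PROBABILITY

	# Over many hits the origin sees a random ~1% sample, and can
	# estimate the true hit count by scaling up:
	hits = 1000000
	sampled = sum(revalidate_this_hit() for _ in range(hits))
	print("origin saw", sampled,
	      "=> estimated total", int(sampled / STALE_PROBABILITY))

The scaled count sampled/.01 is an unbiased estimate of the true
number of hits, and the other ~99% of requests never touch the origin
server at all.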

>Whether or not some ISP customers want it does not change the fact that
>it is damaging to the community as a whole, and it's a lot better to inform
>people on how not to be a "scum sucking pig" than it is to have a proposed
>standard on slightly-less piggish ways to be a pig.

Nonsense. If "people" won't do the first but will do the second, then
getting them to do the second leaves the community better off than if
they simply remain "scum sucking pigs". Damage to the community as a
whole follows only if proposing the second is the _cause_ of their not
doing the first.

Paul
