
Re: determining proxy reliability

From: Patrick McManus <mcmanus@appliedtheory.com>
Date: Wed, 19 Mar 1997 11:13:16 -0500 (EST)
Message-Id: <199703191613.LAA02148@pat.appliedtheory.com>
To: Jeffrey Mogul <mogul@pa.dec.com>
Cc: mcmanus@appliedtheory.com, http-wg@cuckoo.hpl.hp.com
In a previous episode Jeffrey Mogul said...
::     
:: It is true that there is no technical mechanism in the hit-metering
:: proposal to prevent a proxy from agreeing to hit-meter a response,
:: and then not doing so.  The proposal states MUST-level requirements,
:: but provides no means to verify that they are always observed.
:: 
:: But this is not any different from any other HTTP protocol requirement.

Jeff,

  I'm starting to see this in a new light; your argument about
protocol trust is a good one. In summary, non-reliable worries me more
than non-compliant, read on. What I'm still hesitant about is what I
feel will be very strong content-provider resistance to this
proposal, because its accuracy is so unbounded. While solving that
problem precisely is extremely difficult, I think that something needs
to be done to reduce it.. consider this:

  A content provider implements hit-metering instead of their
current cache-busting technique.. they're most pleased as the server
load drops by 50% (as, potentially, does WAN traffic), but they also take
a 10% drop in usage numbers (after aggregating all the proxy
reports).. if they get paid on a linear model they could lose 10% of
their income.. on quota systems it could be even worse. So they
immediately revert to cache busting, and their numbers return to
their expected values.
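The arithmetic in that scenario can be sketched as follows; the traffic and payment figures are hypothetical, chosen only to match the percentages above:

```python
# Hypothetical figures matching the scenario: hit-metering halves
# origin-server load, but the aggregated proxy reports undercount
# actual usage by 10%.
true_hits = 1_000_000             # actual client requests in a period
reported = int(true_hits * 0.90)  # 10% of hits never get reported
rate = 0.001                      # payment per reported hit (linear model)

income_cache_busting = true_hits * rate  # every hit reaches the origin
income_hit_metering = reported * rate    # only reported hits count

loss = income_cache_busting - income_hit_metering
print(f"income drops from {income_cache_busting:.0f} to "
      f"{income_hit_metering:.0f}: a {loss / income_cache_busting:.0%} loss")
```

So even though the origin serves half as many requests, a linear payment model passes the full reporting shortfall straight through to income.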

It's a shame really, because we should all win from their using hit
metering.. not only is net traffic reduced, but I would think that
their access numbers (if reported properly) could actually go up, since
site availability isn't always at the whims of WAN reliability once local
caching enters the picture.

What they'd like to do is disallow caching by non-compliant or
non-reliable caches and allow it for the rest.. I think that non-reliable
here is more important than non-compliant.. I'm not worried about the
proxy's algorithm so much as about machines that are perpetually
rebooted or have their proxies restarted by itchy sysadmins.. or even
machines that devote a fixed amount of resources to storing these
counts and, when those resources are exhausted and the origin server is
unreachable, drop the report on the floor instead of delivering it..
certain environments are chronically prone to this sort of thing.

I made a proposal months ago about being able (at the origin
server's option) to force the return of 0/0 counts.. at least this would
allow the construction of deterministic audit trails and therefore some
notion of reliability.. it doesn't account for outright fraud by the
proxy, of course (they could misreport the numbers), but it does close
the case of any open-ended transactions.. I'm not sure that it is
enough, but I do think it helps considerably in establishing 'good
faith and a reliable history', which is something to go on..
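A minimal sketch of that idea, assuming the draft's "count=uses/reuses" form for the Meter report header; the class and function names here are hypothetical, not from the I-D:

```python
# Toy model of a metering proxy's per-entry counters. The point of
# forcing a final report is that count=0/0 is still a report: every
# metered grant gets closed out, so the origin can build a
# deterministic audit trail per proxy.

class MeteredEntry:
    """Counters a metering proxy would keep per cached response."""
    def __init__(self, url):
        self.url = url
        self.uses = 0    # cache hits served for this entry
        self.reuses = 0  # revalidated reuses of this entry

    def serve_hit(self):
        self.uses += 1

def final_report(entry):
    # Emitted even when nothing was served: a 0/0 count closes the
    # otherwise open-ended transaction deterministically.
    return f"Meter: count={entry.uses}/{entry.reuses}"

entry = MeteredEntry("http://example.com/page.html")
print(final_report(entry))   # "Meter: count=0/0" .. entry never used
entry.serve_hit()
entry.serve_hit()
print(final_report(entry))   # "Meter: count=2/0" after two hits
```

A lying proxy can still misreport the counts, as noted above; what the forced report buys is that a *missing* report becomes detectable, which is the raw material for a reliability history.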

-Pat, not feeling bad about bringing this back up while it's still an
I-D, considering we can do 50 messages a day on cookies that are
nearing last call..
Received on Wednesday, 19 March 1997 08:15:52 EST
