
Re: Use hash to reduce traffic

From: David Morris <dwm@xpasc.com>
Date: Tue, 6 May 2003 15:24:07 -0700 (PDT)
To: Jeffrey Mogul <Jeff.Mogul@hp.com>
cc: Diwakar Shetty <diwakar.shetty@oracle.com>, <ietf-http-wg@w3.org>
Message-ID: <Pine.LNX.4.33.0305061519230.2436-100000@egate.xpasc.com>

And how much would compression reduce the data transferred? The two are
not mutually exclusive, I know, but a solution to the issue you describe
would be quite complex to implement, whereas compression would be pretty
straightforward, and I suspect it would reduce the bytes transferred by
more than the 36% redundancy.
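
For a rough sense of the compression side of that tradeoff, here is a
minimal Python sketch (gzip on the response body; the sample payload and
the savings it reports are made up for illustration, and real savings
depend entirely on the content):

    import gzip

    # Hypothetical response body; repetitive markup compresses well.
    body = b"<html>" + b"<p>some repeated markup</p>" * 100 + b"</html>"

    compressed = gzip.compress(body)
    savings = 1 - len(compressed) / len(body)
    print(f"original: {len(body)} bytes, gzipped: {len(compressed)} bytes "
          f"({savings:.0%} saved)")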

Dave Morris

On Tue, 6 May 2003, Jeffrey Mogul wrote:

>
> Diwakar Shetty <diwakar.shetty@oracle.com> writes:
>
>     Don't we have "If-Modified-Since" and "ETags" to do this job
>     already? What will "hash" do extra which is not being done
>     currently by the two mechanisms mentioned above?
>
> The existing mechanisms don't solve the problem of "aliasing" where
> two different URLs point to the same content, and a related problem
> where a given URL yields content in a sequence like
>
> 	A
> 	B
> 	C
> 	A
>
> These two effects can cause redundant content transfer (that is,
> a hypothetical perfect cache could avoid these transfers).
> We found that these two effects together, in one large trace,
> caused about 36% of the bytes transferred to be "redundant" in
> this sense.  See the WWW 2002 paper I've already cited.
>
> -Jeff
>
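
For concreteness, a minimal Python sketch of the digest-keyed caching
idea Jeff describes (SHA-256 and the digest-advertising handshake are
assumptions for illustration; the actual design is the one in the
WWW 2002 paper he cites):

    import hashlib

    def sha256_hex(payload: bytes) -> str:
        return hashlib.sha256(payload).hexdigest()

    class DigestCache:
        """Cache keyed by content digest rather than by URL, so aliased
        URLs and an A, B, C, A sequence never re-transfer a known body."""
        def __init__(self):
            self.bodies = {}  # digest -> payload bytes

        def get(self, advertised_digest, fetch_body):
            # Assumed handshake: the server first advertises the digest
            # of the current representation; the body is fetched only on
            # a digest miss.
            if advertised_digest in self.bodies:
                return self.bodies[advertised_digest]  # transfer avoided
            body = fetch_body()
            self.bodies[sha256_hex(body)] = body
            return body

    transfers = 0
    def fetch_a():
        global transfers
        transfers += 1
        return b"content A"

    cache = DigestCache()
    for _ in range(2):                  # content sequence: A, ..., A again
        cache.get(sha256_hex(b"content A"), fetch_a)
    print(transfers)                    # 1 -- the repeated A was not re-sent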