Re: Use hash to reduce traffic

And how much would compression reduce the data transferred? Not mutually
exclusive, I know, but a solution to the issue you describe would be quite
complex to implement, whereas compression would be pretty straightforward,
and I suspect it would reduce the bytes transferred by more than the 36%
redundancy.
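
As a rough illustration of the compression argument (a sketch only; real
savings depend heavily on the content mix in a trace), Python's standard
gzip module on a hypothetical repetitive HTML body:

```python
import gzip

# Hypothetical response body -- repetitive HTML compresses very well;
# images or already-compressed content would show far smaller savings.
body = (b"<html><head><title>Example</title></head><body>"
        b"<p>Hello, world!</p></body></html>") * 50

compressed = gzip.compress(body)
saved = 1 - len(compressed) / len(body)
print(f"original={len(body)}B compressed={len(compressed)}B saved={saved:.0%}")
```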

Dave Morris

On Tue, 6 May 2003, Jeffrey Mogul wrote:

>
> Diwakar Shetty <diwakar.shetty@oracle.com> writes:
>
>     Don't we have "Modified-Since" and "ETags" to do this job already?
>     What will "hash" do extra which is not being done currently by the
>     above-mentioned two mechanisms?
>
> The existing mechanisms don't solve the problem of "aliasing" where
> two different URLs point to the same content, and a related problem
> where a given URL yields content in a sequence like
>
> 	A
> 	B
> 	C
> 	A
>
> These two effects can cause redundant content transfer (that is,
> a hypothetical perfect cache could avoid these transfers).
> We found that these two effects together, in one large trace,
> caused about 36% of the bytes transferred to be "redundant" in
> this sense.  See the WWW 2002 paper I've already cited.
>
> -Jeff
>
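
The duplicate-transfer idea Jeff describes can be sketched as a cache
keyed by a content digest rather than by URL (a toy model, not the
proposal itself: here the client computes the hash from the body it is
handed, whereas a deployed scheme would have to learn the digest before
downloading, e.g. from a response header):

```python
import hashlib

class HashCache:
    """Toy client cache indexed by content digest.

    A URL-keyed cache misses when two URLs alias the same content, or
    when a URL's content cycles A, B, C, A.  Keying on the body's hash
    catches both cases.
    """

    def __init__(self):
        self.bodies = {}  # hex digest -> body

    def fetch(self, url, server_body):
        digest = hashlib.sha256(server_body).hexdigest()
        if digest in self.bodies:
            return self.bodies[digest], 0            # no bytes transferred
        self.bodies[digest] = server_body
        return server_body, len(server_body)         # full transfer

cache = HashCache()
a, b, c = b"content A", b"content B", b"content C"
transferred = 0
# The sequence A, B, C, A at one URL, plus A again under an alias:
for url, body in [("/x", a), ("/x", b), ("/x", c), ("/x", a), ("/alias", a)]:
    _, n = cache.fetch(url, body)
    transferred += n
```

Only the first occurrence of each distinct body is transferred; the
repeated A (both at /x and at the alias) costs nothing, which is the
redundancy that neither If-Modified-Since nor ETags can eliminate.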

Received on Tuesday, 6 May 2003 18:29:46 UTC