
Re: Use hash to reduce traffic

From: Jeffrey Mogul <Jeff.Mogul@hp.com>
Date: Tue, 06 May 2003 14:12:32 -0700
Message-Id: <200305062112.h46LCWVn007403@wera.hpl.hp.com>
To: Diwakar Shetty <diwakar.shetty@oracle.com>
Cc: ietf-http-wg@w3.org

Diwakar Shetty <diwakar.shetty@oracle.com> writes:

    Don't we have "If-Modified-Since" and "ETags" to do this job already?
    What extra would a "hash" do that is not already done by the two
    mechanisms mentioned above?

The existing mechanisms don't solve the problem of "aliasing", where
two different URLs point to the same content, nor the related problem
where a given URL yields content in a sequence like

	A
	B
	C
	A

These two effects can cause redundant content transfer (that is,
a hypothetical perfect cache could avoid these transfers).
We found that these two effects together, in one large trace,
caused about 36% of the bytes transferred to be "redundant" in
this sense.  See the WWW 2002 paper I've already cited.
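To make the idea concrete, here is a rough sketch (mine, purely
illustrative, not the mechanism evaluated in the paper) of a cache
indexed by a digest of the response body rather than by URL; with such
an index, both the aliasing case and the A-B-C-A sequence above become
hits:

    import hashlib

    class DigestCache:
        def __init__(self):
            self.bodies = {}      # digest -> body bytes
            self.url_digest = {}  # URL -> digest of the body last seen there

        def store(self, url, body):
            """Record a response; return True if the body was already
            cached, i.e. the transfer was 'redundant' and a digest-aware
            protocol could have avoided sending the payload."""
            d = hashlib.sha256(body).hexdigest()
            redundant = d in self.bodies
            self.bodies.setdefault(d, body)
            self.url_digest[url] = d
            return redundant

    cache = DigestCache()
    print(cache.store("http://x.example/a", b"payload A"))  # False: first copy of A
    print(cache.store("http://y.example/a", b"payload A"))  # True:  aliasing, same bytes at a new URL
    print(cache.store("http://x.example/a", b"payload B"))  # False: the URL now serves B
    print(cache.store("http://x.example/a", b"payload A"))  # True:  back to A, redundant again

A conventional cache keyed by URL plus validator misses both of the
"True" cases; that is the redundancy the 36% figure above refers to.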

-Jeff
