
Re: Heuristic caching without validators

From: Roy T. Fielding <fielding@gbiv.com>
Date: Wed, 16 Jan 2013 23:44:33 -0800
Cc: "Adrien W. de Croy" <adrien@qbik.com>, "HTTP Working Group" <ietf-http-wg@w3.org>
Message-Id: <12DDC8BC-6666-4960-82A9-D8B73A03D8F4@gbiv.com>
To: Mark Nottingham <mnot@mnot.net>
On Jan 16, 2013, at 10:38 PM, Mark Nottingham wrote:
> On 17/01/2013, at 12:23 PM, Adrien W. de Croy <adrien@qbik.com> wrote:
>>> The original intent was to leave this open, AIUI, precisely because most content on the Web doesn't provide any freshness information.
>> There's something a bit disturbing about that.
>> If most web content doesn't provide freshness information (which matches what we observe in practice), then heuristic freshness calculation is arguably the most important part of caching.
>> Having the most important part left unspecified and open seems like a bit of a problem for interop.
>> Where can a web author or cache implementor go to see what behaviour to expect?  Maybe we need a BCP?
> Well, an algorithm could be suggested in a separate document, but we can't require people to follow it. Lots of effort has gone into tweaking heuristics over the years by various caches, and it's often left up to the administrator (e.g., see refresh_pattern in Squid).
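The Squid directive Mark mentions illustrates how administrator-tuned these heuristics are. A `refresh_pattern` line takes a URL regex, a minimum lifetime (minutes), a percentage of the object's age since Last-Modified, and a maximum lifetime; the values below are illustrative, not a recommendation:

```
# Cache image files for at least 1 day, at most 1 week,
# or 20% of the time elapsed since Last-Modified.
refresh_pattern -i \.(gif|png|jpg)$ 1440 20% 10080
```

Two proxies with different `refresh_pattern` rules can legitimately make different freshness decisions for the same response, which is exactly the interop concern Adrien raises.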

Moreover, heuristics ought to be based on local network needs, such
as the cost/load of network bandwidth, the latency required to
perform a conditional GET, and whether or not the cache is shared
by many users.  The heuristics used by a corporate proxy are going
to be very different from those used on a boat (or a spacecraft, for
that matter).
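For concreteness, the widely implemented rule of thumb (also given as an example in the HTTPbis caching draft) is to use a fraction, typically 10%, of the time elapsed since Last-Modified, capped at some maximum. A minimal sketch, where the function name and the one-day default cap are illustrative assumptions, precisely the knobs a local administrator would tune:

```python
from datetime import datetime, timedelta

def heuristic_freshness(date: datetime, last_modified: datetime,
                        cap: timedelta = timedelta(days=1)) -> timedelta:
    """Estimate a freshness lifetime for a response with no explicit
    expiration, using the common "10% of the time since Last-Modified"
    heuristic, capped at a configurable maximum."""
    if last_modified >= date:
        return timedelta(0)  # no usable age signal; treat as already stale
    return min((date - last_modified) / 10, cap)
```

A cache on a high-latency link might raise the cap to avoid conditional GETs; a shared corporate proxy might lower it to keep many users fresher.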

Received on Thursday, 17 January 2013 07:44:57 UTC
