
RE: Call for Adoption -- Cache-Control: immutable

From: Mike Bishop <Michael.Bishop@microsoft.com>
Date: Thu, 8 Dec 2016 20:27:46 +0000
To: Martin J. Dürst <duerst@it.aoyama.ac.jp>, "Mark Nottingham" <mnot@mnot.net>, HTTP Working Group <ietf-http-wg@w3.org>
CC: Patrick McManus <mcmanus@ducksong.com>
Message-ID: <BN6PR03MB2708326D0067AE61B632300287840@BN6PR03MB2708.namprd03.prod.outlook.com>
Shift-Reload is different still -- all requests go out without looking at the cache in the first place.  So you see unconditional requests, rather than revalidations, and always get fresh content.  If those responses are cacheable, they still get added to the cache later, though.
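[The distinction shows up in the request headers. A sketch, not taken from the original message -- resource name and validator are made up:]

```http
# Normal reload: the browser revalidates what it already has cached
GET /styles.css HTTP/1.1
Host: example.com
If-None-Match: "abc123"

# Shift-Reload: unconditional request, the cache is bypassed entirely
GET /styles.css HTTP/1.1
Host: example.com
Cache-Control: no-cache
Pragma: no-cache
```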

The one situation I've thought of is that a change to browser behavior only helps updated clients, whereas "immutable" supported in a proxy can effectively help every client behind it, updated or not.  However, since immutable restricts itself to HTTPS, which is less likely to be proxied, how *much* extra benefit that brings is a very open question.
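[For reference, the directive under discussion is just an additional token in the response's Cache-Control header; a hypothetical response, not quoted from the draft:]

```http
HTTP/1.1 200 OK
Content-Type: text/css
Cache-Control: max-age=31536000, immutable
```

A cache holding this response can skip revalidation on reload for the lifetime of the entry, since the server has promised the representation will not change.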

-----Original Message-----
From: Martin J. Dürst [mailto:duerst@it.aoyama.ac.jp] 
Sent: Wednesday, December 7, 2016 9:20 PM
To: Mike Bishop <Michael.Bishop@microsoft.com>; Mark Nottingham <mnot@mnot.net>; HTTP Working Group <ietf-http-wg@w3.org>
Cc: Patrick McManus <mcmanus@ducksong.com>
Subject: Re: Call for Adoption -- Cache-Control: immutable

On 2016/12/08 13:55, Mike Bishop wrote:
> I'm generally favorable toward this idea, but will note one open question in my mind:  This seems to be very tightly tied to the scenario of hitting refresh on a page whose content frequently changes but whose dependent resources don't.  Putting "immutable" on those dependent resources helps reduce the server load and time taken when the user hits refresh, either in their own local cache or in proxies that are on path to the site.
>
> There seems to be a parallel discussion (see https://docs.google.com/document/d/1vwx8WiUASKyC2I-j2smNhaJaQQhcWREh7PC3HiIAQCo/edit for Chrome's) about softening the behavior of the refresh button to avoid force-refreshing all dependencies, which would likely have the same results.  Can someone point me to a scenario in which both are worth doing, or is this really a pair of mutually-sufficient solutions to the same problem?

There's a big difference between what is ideal (or close to ideal) for development and what is ideal for production.

For development, you want the "always reload everything" behavior. For production, you hopefully don't need that anymore. So the browser-side solution with a switch between "really reload everything" and "reload, but with moderation" could work. I think that at some time, Ctrl-R did the latter, and Ctrl-Shift-R did the former on some browsers.

There are other tricks you can play. Some frameworks (e.g. Rails) embed a version number or digest in the URL of a "stable" resource and change it every time that resource changes.
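[A minimal sketch of that trick in Python -- a simplified illustration of content fingerprinting, not Rails' actual implementation, which uses its own digest scheme:]

```python
import hashlib

def fingerprint(filename: str, content: bytes) -> str:
    """Return a cache-busting filename embedding a digest of the content."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    name, dot, ext = filename.rpartition(".")
    return f"{name}-{digest}{dot}{ext}" if dot else f"{filename}-{digest}"

# Any change to the content yields a new URL, so each versioned copy can
# safely be cached "forever" (e.g. with max-age=31536000, immutable)
# without ever serving stale bytes.
old = fingerprint("app.css", b"body { color: red }")
new = fingerprint("app.css", b"body { color: blue }")
```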

Overall, it's probably possible to imagine an ideal world where only one such mechanism is used, but because neither browsers nor servers (and the stuff served by them) are perfect, we usually end up with needing more than one mechanism.

Regards,   Martin.
Received on Thursday, 8 December 2016 20:28:22 UTC

This archive was generated by hypermail 2.3.1 : Thursday, 8 December 2016 20:28:26 UTC