Re: dont-revalidate Cache-Control header

On Tue, Jul 14, 2015 at 10:12 AM, Amos Jeffries <>
> > Amos, not sure I follow the proxy conclusion.. if I'm reading this
> > correctly, it sounds like if I specify a 1 year+ max-age, then Squid
> > will revalidate the object for each request?
> No, for the first year you get normal caching behaviour. Then from the
> 1yr mark you get one-ish revalidation, and the new copy is used and
> resets the 1yr counter. So instead of getting things cached forever /
> 68yrs (possibly by error), you get at least one revalidation check per
> year per object.

We are seeing this behavior on objects that have not existed for 1 year,
so I don't think we are triggering that once-a-year revalidation path.

> >> One major issue with this solution is that it doesn't address situations
> >> where content is embedded in a third party site. Eg, if a user includes
> >> an API like Google Maps or the Facebook like button, those APIs may load
> >> subresources that should fall under this stricter policy. This issue
> >> cuts both ways -- if 3rd party content on your site isn't prepared for
> >> these semantics you could break it.
> >
> > Hmm, I think a markup solution would still work for the embed case:
> > - you provide a stable embed URL with relatively short TTL (for quick
> > updates)
> > - embedded resource is typically HTML (iframe) or script, that initiates
> > subresource fetches
> > -- said resource can add appropriate attributes/markup on its
> > subresources to trigger the mode we're discussing here
> >
> > ^^ I think that would work, no? Also, slight tangent.. Fetch API has a
> > notion of "only-if-cached" and "force-cache", albeit both of those are
> > skipped on "reload", see step 11:
> >

One version of the "markup option" could be allowing us to explicitly
override the Fetch attributes of a subresource -- eg, set the cache mode
to "force-cache". This option does present the challenge of ensuring that
we address every possible subresource (think url()'s in CSS). Being able
to control fetch options for subresources would be extremely useful.
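From script, the existing Fetch API already exposes this override today; a
minimal sketch (the markup/attribute form is hypothetical, this is only the
imperative equivalent):

```typescript
// Sketch: setting an explicit cache mode on a subresource fetch via the
// Fetch API's "cache" option. These values mirror the spec's RequestCache
// enum; "force-cache" reuses any cached response regardless of freshness,
// "only-if-cached" never touches the network (same-origin only).
type CacheMode =
  | "default" | "no-store" | "reload"
  | "no-cache" | "force-cache" | "only-if-cached";

function cachedFetchInit(mode: CacheMode): { cache: CacheMode } {
  return { cache: mode };
}

// Usage (in the page or script that owns the subresource):
//   fetch("/static/app.js", cachedFetchInit("force-cache"));
```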

That said, this doesn't feel like a great thing for us to promote as a web
performance best practice. "If you use long cache lifetimes for your static
content, the dont-revalidate cache control header will reduce the cost of
client reloads" seems like a piece of advice folks might take, as would
"use the <meta> tag 'dont-reload-non-expired-resources' to avoid browsers
revalidating your content when the user presses reload". On the other hand,
"you should find every image, script, stylesheet, etc and set the fetch
option on each to say force-cache" feels more tedious and unlikely to be
adopted.
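For concreteness, the header form of the advice above would look something
like this (dont-revalidate is the proposed token under discussion, not a
standard directive; the max-age value is just an example):

```http
Cache-Control: max-age=31536000, dont-revalidate
```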

> On Mon, Jul 13, 2015 at 2:57 AM, Ben Maurer wrote:
> >
> >> We could also study this in the HTTP Archive -- if I took all resources
> >> that had a 30 day or greater max age and sent their servers revalidation
> >> requests 1 week from today, what % of them return a 304 vs other
> >> responses?
> >
> > Not perfect, but I think it should offer a pretty good estimate:
> >
> > - ~48% of resource requests end up requesting the same URL (after 30
> > days). Of those...
> > -- ~84% fetch the same content (~40% of all requests and ~33% of total
> > bytes)
> > -- ~16% fetch different content (~8% of all requests and ~9% of total
> > bytes)

Is it possible to limit this to only resources which claimed a > 30 day
max-age in the first request?
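Filtering on the first response's declared lifetime would just mean parsing
max-age out of its Cache-Control header before counting it. A rough sketch
(the helper name and 30-day threshold are illustrative, and the parsing is
deliberately simplified -- it ignores quoted strings, s-maxage, etc.):

```typescript
// Rough sketch: did a response's Cache-Control header claim a freshness
// lifetime of more than 30 days?
const THIRTY_DAYS_SECONDS = 30 * 24 * 60 * 60;

function claimsLongMaxAge(cacheControl: string): boolean {
  // max-age must start the header or follow a comma/whitespace, so we
  // don't accidentally match inside another token like "s-maxage".
  const match = /(?:^|[,\s])max-age=(\d+)/i.exec(cacheControl);
  if (!match) return false;
  return parseInt(match[1], 10) > THIRTY_DAYS_SECONDS;
}

// claimsLongMaxAge("public, max-age=31536000") -> true
// claimsLongMaxAge("max-age=600")              -> false
```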


Received on Tuesday, 14 July 2015 10:03:40 UTC