Re: dont-revalidate Cache-Control header

On 18/07/2015 11:10 a.m., Guille -bisho- wrote:
>>
>>
>>> But again, why not just change the page reload behavior with some
>>> directive on the reloaded page, rather than changing the caching
>>> semantics of the cached objects? Changing the caching semantics to
>>> make a URL absolutely permanent is dangerous, as we discussed: you
>>> can freeze a page.
>>
>> Because a) this is about revalidation (Ctrl+r) rather than reloading
>> (Ctrl+Shift+r) and b) revalidation also happens a lot from non-browser
>> clients and middleware. The latter are very unlikely to have even looked
>> at the payload before trying their revalidation. If you put it in the
>> payload it effectively becomes an end-to-end feature only of use to
>> private caches (aka browsers and closely related apps).
>>
>>>
>>> My proposal to just specify the reload behavior for subresources
>>> (disabling revalidation) on the page that causes the fetches looks
>>> simpler and less dangerous. It just makes the reload button the same
>>> as clicking on the URL bar and pressing Enter again.
>>>
>>
>> If this were a feature only of use to browsers, I would agree with you.
>>
>> But it's also of potential use to shared/middleware caches for the same
>> purpose of reducing revalidations, which means header solutions are much
>> preferable to payload ones.
>>
>>
> Yeah, I agree.
> 
> What I don't see is how to easily prevent the potential issues if a URL
> that is not versioned is marked as static by a hack/mistake. The
> implications are really dangerous. The proposed idea of limiting this to
> sub-resources does not apply well to non-browsers and middleware. What
> should the behavior be for non-browsers in that case? How do you plan to
> know, in middleware/non-browser clients, when things are/aren't
> subresources?

It doesn't matter. We trust the origin servers to represent their
objects properly. If in doubt, the client is trusted to force-reload
appropriately.

If an origin gets hacked and the defaced objects get cached, they have
the option of a) distributing a JS that automatically performs a request
for the fixed URL with Cache-Control:max-age=0,no-cache, without manual
user involvement, or b) changing references to another URL.

Methodology (a) above is made possible by the proposed semantics that
"static" means clients always do a full fetch to replace the content
wholesale instead of revalidating what they have.
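
Roughly, the recovery script in (a) could be as small as this (an
untested sketch with a hypothetical URL; any Fetch-capable client would
do the same job):

  // Sketch only: refetch the repaired resource with request directives so
  // caches on the path refresh their stored copy. The URL is hypothetical.
  fetch('https://example.com/static/app.js', {
    headers: { 'Cache-Control': 'max-age=0, no-cache' }
  }).then(function (response) {
    // Caches that honour the request directives replace the poisoned entry;
    // no manual Ctrl+Shift+r from the user is needed.
    console.log('refetched with status', response.status);
  });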

> 
> If there is a non-browser client that is revalidating content, it's
> probably because it is configured to do so; it can be changed to not do
> so if someone really wants that. There is a must-revalidate header that
> they should be obeying, and in theory if it's not set and the cache TTL
> has not expired, they should not be doing those revalidations.

In the absence of explicit expiry or revalidation requirements,
heuristics come into effect. Those may place any kind of short lifetime
on objects. As I discussed with Ben at the beginning, 1 year is the
maximum Squid currently allows, and 1 week is the default, etc.
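
Roughly, heuristic freshness works like this (a simplified sketch; the
10% factor is the usual RFC 7234 suggestion, and the one-week/one-year
caps just echo the figures above rather than Squid's actual
refresh_pattern logic):

  // Simplified sketch of heuristic lifetime when no explicit expiry is given.
  const ONE_WEEK = 7 * 24 * 3600;   // seconds, illustrative default cap
  const ONE_YEAR = 365 * 24 * 3600; // seconds, illustrative hard maximum

  function heuristicLifetime(dateHeader: Date, lastModified?: Date,
                             configuredMax: number = ONE_WEEK): number {
    const cap = Math.min(configuredMax, ONE_YEAR); // never beyond the hard cap
    if (!lastModified) return 0;                   // no basis: treat as stale
    const ageAtFetch = (dateHeader.getTime() - lastModified.getTime()) / 1000;
    return Math.min(Math.max(0, ageAtFetch * 0.10), cap);
  }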

When that lifetime runs out, revalidation is what happens today. With
"static" as proposed by Ben, that lifetime would either never run out
or, if it did, a full fetch would happen instead of a revalidation.
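
To make the difference concrete (an illustration only, not the draft's
normative wording):

  // Illustration: what a cache sends once a stored response's lifetime is up.
  interface StoredEntry { url: string; etag?: string; isStatic?: boolean; }

  function refreshRequest(entry: StoredEntry): Request {
    if (entry.isStatic) {
      // Proposed "static" semantics: no conditional request at all; fetch the
      // whole representation again and replace the cached copy.
      return new Request(entry.url);
    }
    // Today's behaviour: a conditional request to revalidate what we hold.
    return new Request(entry.url,
      entry.etag ? { headers: { 'If-None-Match': entry.etag } } : {});
  }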

> 
> That's why I think that the problem with revalidations mostly affects
> browsers, and why my proposal makes sense.
> 

Mostly, yes.  Always/Only, no.

Amos

Received on Saturday, 18 July 2015 05:28:13 UTC