Re: draft-ietf-httpbis-p6-cache-06

It's not dependable in the sense that authors can't rely upon *all*
cached copies of a response being invalidated all of the time (and a
note to this effect would probably be worthwhile).

However, there is still very much a point; for many kinds of data, the
most relevant cached copies are those in between the user who made a
change (with a POST, PUT or whatever) and the origin server. E.g., if
you submit a blog comment, it's important that you see the change
quickly, less so that someone halfway around the world sees the change
as quickly.
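
To make the pattern concrete, here's a minimal sketch of the
invalidation behaviour being discussed: a cache on the request path
drops its stored copy when it forwards a successful response to an
unsafe request, including any URI named in Location or
Content-Location. The names (SimpleCache, forward) are purely
illustrative, not from the draft or any particular implementation:

    UNSAFE_METHODS = {"POST", "PUT", "DELETE"}

    class SimpleCache:
        def __init__(self):
            self.store = {}  # request URI -> (status, headers, body)

        def handle(self, method, uri, forward):
            # Serve a hit for GET; everything else goes upstream.
            if method == "GET" and uri in self.store:
                return self.store[uri]

            status, headers, body = forward(method, uri)

            if method == "GET" and status == 200:
                self.store[uri] = (status, headers, body)
            elif method in UNSAFE_METHODS and status < 400:
                # Invalidate the request URI, plus anything the
                # response points at via Location / Content-Location.
                self.store.pop(uri, None)
                for h in ("Location", "Content-Location"):
                    if h in headers:
                        self.store.pop(headers[h], None)

            return (status, headers, body)

So in the blog-comment case: GET /comments is cached, POST /comments
invalidates that entry, and the next GET /comments through the same
path is refetched and sees the new comment.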

As far as going through different proxies -- that's largely an
implementation and deployment issue. With HTCP
<http://www.rfc-editor.org/rfc/rfc2756.txt>, you can coordinate such
invalidations between peered proxies; in fact, my employer has just
finished funding changes to Squid to ensure that this happens
correctly, because we have uses for this type of deployment.
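
For the curious, the Squid side looks roughly like the sketch below.
The directive names (htcp_port, and the htcp / htcp-forward-clr
cache_peer options) are from memory of Squid's HTCP support, so treat
them as assumptions to check against the documentation for your
version; hostnames and ports are placeholders:

    # Speak HTCP, peer the caches as siblings, and forward HTCP CLR
    # (invalidation) messages so a change seen by one peer clears the
    # others' stored copies as well.
    htcp_port 4827
    cache_peer sibling1.example.com sibling 3128 4827 htcp htcp-forward-clr
    cache_peer sibling2.example.com sibling 3128 4827 htcp htcp-forward-clr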

It definitely isn't perfectly reliable, but it is very often good  
enough.

Cheers,


On 08/06/2009, at 9:21 AM, Jamie Lokier wrote:

> Mark Nottingham wrote:
>> On 27/05/2009, at 3:17 AM, Adrien de Croy wrote:
>>> Wrt POST (or any method).  If the response to a POST is marked
>>> explicitly by the origin server as cachable, why should a subsequent
>>> POST invalidate that contrary to other Cache-control directives?
>>> Surely this should only apply if the original method was not POST?
>>
>> Because POST changes state on the server; it's a useful pattern to
>> have POST (or other responses) cached, but invalidated upon a visible
>> update.
>
> But it's unreliable, because requests can go via different proxies.
>
> What is the point in mandating that proxies support an unreliable
> mechanism, which suggests (wrongly) to server authors that they can
> depend on it, when there are good reliable caching mechanisms in the
> spec already?
>
> -- Jamie


--
Mark Nottingham     http://www.mnot.net/
